Antiviral Effect of Nonfunctionalized Gold Nanoparticles against Herpes Simplex Virus Type-1 (HSV-1) and Possible Contribution of Near-Field Interaction Mechanism

The antiviral activity of nonfunctionalized gold nanoparticles (AuNPs) against herpes simplex virus type-1 (HSV-1) in vitro was revealed in this study. We found that AuNPs are capable of reducing the cytopathic effect (CPE) of HSV-1 in Vero cells in a dose- and time-dependent manner when used in pretreatment mode. The demonstrated antiviral activity was within the nontoxic concentration range of AuNPs. Interestingly, we noted that nanoparticles of smaller sizes reduced the CPE of HSV-1 more effectively than larger ones. The observed phenomenon can be tentatively explained by the near-field action of nanoparticles at the virus envelope. These results show that AuNPs can be considered potential candidates for the treatment of HSV-1 infections.

Introduction
The strong ability of microorganisms and viruses to develop drug resistance is a persistent problem in chemotherapy. Viruses have numerous mechanisms of genetic variation that aid their survival [1-4]. The development of viral resistance necessitates new approaches to antiviral drug development; therefore, efforts to improve medicines and methods of antiviral therapy continue. Nanotechnology provides numerous opportunities for new drug discovery. The fact that nonfunctionalized nanoparticles demonstrate antiviral activity [5-9] is of particular interest, especially because the observed activity probably results from a new mechanism of action. The known experimental facts about the antiviral action of nonfunctionalized nanoparticles [10] and the new findings described in this communication allow us to hypothesize that a mechanism based on the near-field interaction between nanoparticles and the virus envelope and/or its components may take place.

HSV-1 and herpes simplex virus type-2 (HSV-2, genital herpes) are members of the human Herpesviridae family and the Alphaherpesvirinae subfamily. HSV-1 is one of the most common human infections, affecting 60-95% of the adult population worldwide [11]. Transmission of both HSV-1 and HSV-2 occurs during close personal contact. Most humans are infected with HSV-1 in childhood or early adolescence and remain latently infected throughout life. Sexual contact is the primary route of HSV-2 transmission. The use of antiviral agents after the establishment of latency will not eliminate the virus, although it helps to control the infection.

All herpes viruses are enveloped virions with an icosahedral nucleocapsid containing the double-stranded linear DNA genome. The neurovirulence and latency of HSV have a direct impact on humans and on the course of the disease, which can result in profound illness and severe neurological sequelae, including HSV encephalitis. HSV evades the host's immune system and establishes persistent infection. During latency, the HSV genome is maintained in neuronal cells in a repressed state. The viral genome may subsequently become activated and be transported via the neuron's axon to the skin, resulting in viral replication and redevelopment of herpetic lesions. HSV incidence and severity have increased over the past decades owing to the increasing number of immunocompromised patients and to sexual activity.
Several therapeutic options are available for HSV infections, with the nucleoside analogs acyclovir (ACV), famciclovir (FCV), and valacyclovir (VCV), which target the viral DNA polymerase [12], as the first line. The guanosine analog ACV remains the gold standard in the treatment of herpesvirus infections, exhibiting both high selectivity and low toxicity [13]. Incorporation of its triphosphate form into the growing DNA chain by the viral polymerase in place of guanosine triphosphate leads to chain termination and inhibition of viral replication. ACV is used for the systemic treatment of HSV infections, including genital and labial herpes. Besides ACV and its prodrug VCV, other nucleoside analogs, FCV and trifluridine, are also used for HSV treatment. However, incomplete suppressive treatment and resistance are serious disadvantages of these drugs. Currently, therapeutic vaccines for HSV-2 patients with genital herpes and helicase-primase inhibitors are new specific anti-HSV drugs in development [14]. However, no available drug can eliminate a latent infection, and prolonged clinical use of antivirals in immunocompromised patients may lead to treatment failure due to the development of antiviral-resistant virus strains [15].

Recent studies have shown that metal nanoparticles, particularly silver nanoparticles (AgNPs), can be effective against different types of viruses [16,17]. The possibility of applying metal-based nanoparticles, including silver, gold, tin, and zinc oxide, for the treatment of herpesvirus infections was also reported [18-21]. Among the known approaches to the treatment of HSV-caused infections, nanogold-based methods represent a promising direction. AuNPs possess a high surface density of free electrons, which gives them inherent optical, electrical, and catalytic properties; as a result, they are widely researched as nanocarriers [22,23]. AuNPs are described as suitable for numerous biosensing functions and applications, including virus detection [24,25]. AuNPs functionalized with the sulfonate ligand 2-(N-morpholino)-ethanesulfonic acid (MES) are nontoxic and effective against HSV-1 because they inhibit attachment of the virus to the cell surface [18]. The advantage of another, longer sulfonate linker (MOS) for AuNP modification was also shown: such particles bind HSV-2 multivalently and deactivate the virus irreversibly [26]. AuNPs modified with MOS were shown to act by a virucidal mechanism, in contrast to the virustatic action found for the MES ligand; at the same time, no inhibitory activity of nonmodified citrate-coated nanoparticles was detected in that work. Mechanistic studies using AuNPs capped with MES revealed that the nanoparticles interfere with the attachment of HSV-1 to the host cell, viral entry, and cell-to-cell spread, thereby inhibiting viral infection [18]. In general, the key role is attributed to spatially oriented functional groups anchored on the nanoparticle surface, which subsequently bind the virus, while the gold nanoparticle itself serves only as a carrier. However, it has been shown that AuNPs unmodified with virus-specific molecules but stabilized with gallic acid are also effective against HSV-1 and HSV-2 in a dose-dependent manner [27]. The antiviral activity of nonmodified AuNPs against HSV-1 has also been demonstrated [28].
Hence, in the case of nonfunctionalized AuNPs, the physical and chemical properties of the AuNPs themselves can be considered the primary reason for their antiviral activity. To explain the antiviral activity of nonfunctionalized AuNPs based on the physical interaction of the virion with AuNPs, we postulate the contribution of a dispersion-force mechanism. The effect of electric fields induced by an inhomogeneous distribution of electric charge around the nanoparticles should be considered the first possible cause and the driving force behind the AuNP antiviral activity. As such properties change with dimension, particle size should be the critical parameter for antiviral activity. Thus, the objective of the present study was to evaluate the effectiveness of unmodified citrate-capped gold nanoparticles against HSV-1 and to check the differences in the CPE of HSV-1 for nanoparticles of different sizes. In addition, we propose a physical mechanism of interaction between nanosized gold and the virus.

Most human viruses have a quasi-spherical shape with a linear dimension of about 100 nm, whereas the characteristic dimensions of the nanoparticles are a few nanometers. Thus, we can model the virus as a spherically shelled solid nanoparticle whose core is characterized by a dielectric constant ε_vir and whose shell is characterized by a dielectric constant ε_vir^shell; the dielectric constant of the nanoparticle is ε_p. The domains of high and low field indicate the existence of gradients of the local field on the surface of the virus shell. The proteins and glycoproteins on the surface of the virus envelope contain polar sites [29], and the dipole moments of these polar sites are under the action of the inhomogeneous field. The field may be caused, among other sources, by daylight or external light illumination, or it can be the field of vacuum fluctuations. This means that forces acting on the viral surface proteins (ponderomotive forces) arise via F = −P_i · ∂E_j/∂x_i, where P_i is a component of the dipole moment and E_j is the local electric field at the virus envelope. This may block the interaction of viral proteins with cell receptors and the penetration of the virus into the cell interior, resulting in an antiviral effect. Moreover, the long-term action of ponderomotive forces on the viral envelope can lead to the destruction of the envelope [6,9]. The local-field enhancement effect requires an external field; under normal conditions, the virus-nanoparticle system is exposed to daylight. When an additional field source acts on the system, one can expect some influence of the external light on the infectivity of the virus; such an additional lighting effect, leading to an increase in the antiviral properties of nanoparticles, has been observed [8].

Herein, we demonstrate that gold nanoparticles undergo a size-dependent interaction with the virus. Pretreatment of the virus with nanoparticles reduced the CPE of HSV-1 in Vero cell culture. This phenomenon can be tentatively explained by the nanoparticles' prevention of viral attachment, penetration into cells, and cell-to-cell spread.

Gold Nanoparticles' Preparation
Colloidal solutions of gold nanoparticles (AuNPs) were synthesized via chemical reduction of tetrachloroauric acid (HAuCl4, Merck, Darmstadt, Germany) with trisodium citrate (NaCit, Na3C6H5O7, Acros Organics, Geel, Belgium) in aqueous solution according to the Turkevich method [30,31].
Thus, HAuCl4 was added to a boiling solution of NaCit taken in double or quadruple molar excess, stirred while boiling for 5 min, and then cooled to room temperature. A higher concentration of reductant leads to the formation of nanoparticles of smaller size. The colloids were brought to their final concentrations: C_Au = 1.5 × 10⁻⁴ M and C_NaCit = 9 × 10⁻⁴ M.

The Particle Size Distribution
The size distribution of the AuNPs was studied by dynamic light scattering (DLS) with a laser correlation spectrometer Zeta Sizer Nano S (Malvern Panalytical Ltd., Malvern, UK) equipped with a correlator (Multi Computing Correlator Type 7032 CE). A helium-neon laser LGN-111 (Polaron, Lviv, Ukraine) with an output power of 25 mW and a wavelength of 633 nm was used to irradiate the suspension. The registration and statistical processing of the laser light scattered from the suspension at an angle of 173° were performed three times for 120 s at 25 °C. The resulting autocorrelation function was treated with the standard software PCS-Size mode v.1.61. The absorption spectra of the gold colloids were recorded in the UV-visible region with a Lambda 35 spectrophotometer (Perkin-Elmer, Norwalk, CT, USA) in 1 cm quartz cells.

Cells and Virus
Vero cells (ATCC CCL-81; American Type Culture Collection, Manassas, VA, USA), derived from normal kidney epithelial cells of an African green monkey, were cultured in Eagle's minimum essential medium (EMEM; Sigma-Aldrich Co. Ltd., Ayrshire, UK). The growth medium was supplemented with 10% heat-inactivated fetal bovine serum (FBS, Sigma-Aldrich Co., St. Louis, MO, USA), 2 mM L-glutamine (Sigma-Aldrich Co., St. Louis, MO, USA), 100 units/mL penicillin G with 100 µg/mL streptomycin (Sigma-Aldrich Co., St. Louis, MO, USA), and 50 µg/mL amphotericin B (Sigma-Aldrich Co., St. Louis, MO, USA). The cells were cultured in growth medium at 37 °C in a humidified 5% CO2 environment. The laboratory strain of HSV-1, strain MacIntyre (ATCC VR-539), was propagated and titrated in Vero cells in EMEM supplemented with 2% FBS. For propagation of the virus, confluent monolayers of Vero cells were infected with HSV at a multiplicity of infection (MOI) of 1 plaque-forming unit (PFU) per 100 cells (MOI = 0.01). After two to three days of infection, when the cytopathic effect was visible, the total virus was harvested and titrated by plaque assay.

Virus Titration
Titration of the viral loads in supernatants was performed by the Reed-Muench method, and the titer was expressed per mL [32]. For the plaque reduction assay, Vero cells grown in 24-well plates (2 × 10⁵ cells/well) were inoculated with HSV-1 as described above. After 48 h of incubation at 37 °C (5% CO2), the cells were fixed with methanol for 15 min and stained with 0.05% methylene blue (Sigma-Aldrich) for 15 min. The HSV-1 plaques were counted under a microscope. Antiviral activity was expressed as the compound concentration required to reduce the number of viral plaques to 50% of the control (virus-infected but untreated). For the CPE inhibition assays, confluent Vero cell monolayers (2 × 10⁴ cells/well) growing in 96-well plates were used. The cell cultures were inoculated with HSV-1 as described above. After 48 h of incubation at 37 °C in 5% CO2, the number of viable cells was determined by the MTT method. We used the Reed-Muench method to calculate TCID50.
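Since titers are reported as TCID50 obtained by the Reed-Muench method, a compact worked sketch of that calculation may be useful. The code below is illustrative only: the dilution series and CPE counts are hypothetical, not data from this study, and the function name is ours, not the authors' software.

```python
# Reed-Muench TCID50 sketch. Illustrative only: the tenfold dilution series
# and infected/uninfected well counts below are hypothetical.

def reed_muench_log10_tcid50(log10_dilutions, infected, uninfected):
    """log10 of the 50% endpoint dilution (TCID50 per inoculated volume).

    Rows must be ordered from most concentrated to most dilute.
    """
    n = len(infected)
    cum_inf = [sum(infected[i:]) for i in range(n)]           # summed toward the dilute end
    cum_uninf = [sum(uninfected[: i + 1]) for i in range(n)]  # summed toward the concentrated end
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            # proportionate distance between the two rows bracketing 50%
            pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            step = log10_dilutions[i] - log10_dilutions[i + 1]
            return log10_dilutions[i] - pd * step
    raise ValueError("50% endpoint not bracketed by the dilution series")

# Hypothetical example: tenfold dilutions, 10 wells per dilution
dils = [-1, -2, -3, -4, -5, -6]
inf = [10, 10, 8, 5, 1, 0]
uninf = [10 - x for x in inf]
print(reed_muench_log10_tcid50(dils, inf, uninf))  # about -3.9, i.e. ~10^3.9 TCID50 per inoculum
```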
Antiviral activity was expressed as the IC50 (50% inhibitory concentration), the concentration required to reduce virus-induced cytopathicity by 50% compared with the untreated control. Tenfold dilutions of frozen supernatants were used to inoculate Vero cell monolayers in 96-well plates, and the infected cells were maintained in culture for 48 h at 37 °C in 5% CO2. Each sample was examined in triplicate.

Cytotoxicity Assay
To check that the AuNPs did not exert toxic effects on cells, Vero cell monolayers were exposed to the AuNP preparations. The number of viable cells was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT; Sigma-Aldrich Co., St. Louis, MO, USA) assay, which is based on the reduction of yellowish MTT to insoluble, dark blue formazan by viable cells. Vero cells were subcultured in 96-well plates at a seeding density of 2 × 10⁴ cells/well in EMEM supplemented with 10% FBS, L-glutamine, and antibiotics at 37 °C in a humidified 5% CO2 environment [33,34]. Confluent monolayers of cells were treated with preparations of AuNPs I and AuNPs II at concentrations of 0.295 and 5.9 µg/mL (six wells for each concentration). The stabilizer (sodium citrate, SC), AuNPs I, and AuNPs II were diluted 1:5 and 1:100 in EMEM supplemented with 2% FBS, 2 mM L-glutamine, and antibiotics (maintenance medium). The control consisted of Vero cells without AuNPs or stabilizer. After 48 h of incubation at 37 °C in 5% CO2, the number of viable cells was determined by adding MTT solution (5 mg/mL) to each well. The cells were incubated for a further 2 h at 37 °C in 5% CO2. The formazan crystals were then dissolved with dimethylsulfoxide (Sigma-Aldrich Co., St. Louis, MO, USA). The absorption values were measured at 550/670 nm using a microplate reader (Benchmark Plus, Bio-Rad Laboratories, Hercules, CA, USA). The reported data represent the percentage of cell viability compared with controls. Cytotoxicity of the compounds is expressed as the 50% cytotoxic concentration (CC50), which is the concentration required to reduce cell growth by 50% compared with untreated controls. Vero cell viability in each well is presented as a percentage of control cells. The CC50 was calculated by linear regression analysis of the dose-response curves obtained from the data.

Antiviral Assay
Different methods of treating the cell monolayers were used to assess the effect of AuNPs on the inhibition of HSV-1 infectivity. Vero cells were grown in 96-well plates (2 × 10⁴ cells/well) and 24-well plates (2 × 10⁵ cells/well), exposed to non-toxic concentrations of AuNPs, and infected with HSV-1. HSV-1 was titrated on Vero cells after treatment with and without AuNPs. Antiviral activity was determined from the difference between the HSV-1 titers in untreated and treated cells.

Virus Pretreatment Assay
To analyze the influence of AuNPs on viral attachment to host cells, HSV-1 was pretreated with AuNPs before Vero cell infection. First, the HSV-1 suspension was incubated in the presence of different concentrations of AuNPs for 0 min, 15 min, 1 h, or 4 h, and then added to confluent Vero cell monolayers at an MOI of 0.001 PFU per cell for 1 h at 37 °C. The treated cells were washed with phosphate-buffered saline (PBS), overlaid with fresh culture medium, and incubated for 48 h. The T0 time point means that AuNPs were added to the virus suspension and the confluent Vero cell cultures immediately.
Cells were incubated for 48 h at 37 °C in 5% CO2 and observed under an inverted microscope until the typical CPE was visible; CPE was read at 48 h post-infection (h p.i.). The supernatants from the 24-well plates were collected and stored at −20 °C or titrated as described above. Vero cells in the 24-well plates were fixed with methanol, stained with 0.05% methylene blue, and washed with PBS, whereas the 96-well plates were treated with MTT [33]. A negative control and a virus control were included in each sample plate.

Post-Treatment Assay
Vero cell monolayers were infected with HSV-1 at MOI = 0.001 [26]. After 1 h of adsorption at 37 °C, the virus inoculum was removed, and the cells were washed three times with PBS to remove unattached virus. AuNPs at different dilutions were then added at the following times: 0, 2, 4, and 24 h p.i. The T0 time point means that the AuNPs and the stabilizer were added to the Vero cell cultures immediately after adsorption. The cell monolayers were incubated with the compounds for 48 h until the typical CPE was visible. As above, the supernatants collected after 48 h of incubation were stored at −20 °C; Vero cells in the 24-well plates were fixed with methanol and stained with methylene blue, whereas the 96-well plates were treated with MTT.

Cryogenic Transmission Electron Microscopy
Cryogenic transmission electron microscopy (cryo-TEM) images were obtained using a Tecnai F20 TWIN microscope (FEI Company, Hillsboro, OR, USA) equipped with a field emission gun, operating at an acceleration voltage of 200 kV. Images were recorded on an Eagle 4k HS camera (FEI Company, Hillsboro, OR, USA) and processed with TIA software (FEI Company, Hillsboro, OR, USA). Specimens were prepared by vitrification of the aqueous solutions on grids with holey carbon film (Quantifoil R 2/2; Quantifoil Micro Tools GmbH, Jena, Germany). Before use, the grids were activated for 15 s in oxygen plasma using a Femto plasma cleaner (Diener Electronic, Ebhausen, Germany). Cryo samples were prepared by applying a droplet (3 µL) of the solution to the grid, blotting with filter paper, and rapid freezing in liquid ethane using a fully automated blotting device, the Vitrobot Mark IV (FEI Company, Hillsboro, OR, USA). After preparation, the vitrified specimens were kept under liquid nitrogen until they were inserted into a Gatan 626 cryo-TEM holder (Gatan Inc., Pleasanton, CA, USA) and analyzed in the TEM at −178 °C. The cryo-TEM measurements were performed under a contractual service agreement with CMPW PAN in Zabrze, Poland.

Statistical Analysis
Statistics, including the mean and standard deviation (SD), were analyzed with GraphPad Prism using a non-parametric unpaired t-test. A p-value of ≤0.05 was considered significant. The data were obtained from two or three independent experiments.
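In GraphPad Prism terminology, a "non-parametric unpaired t-test" usually corresponds to the Mann-Whitney test; assuming that reading, a minimal scipy-based equivalent might look like the sketch below. The viability readings are made up for illustration.

```python
# Non-parametric unpaired two-sample comparison, as a scipy sketch.
# Both groups of viability readings below are made up for illustration.
import numpy as np
from scipy import stats

control = np.array([98.0, 101.5, 99.2, 100.8, 97.6, 102.1])  # % viability, untreated
treated = np.array([70.3, 72.8, 69.5, 74.1, 68.9, 71.7])     # % viability, AuNP-treated

print(f"mean +/- SD, control: {control.mean():.1f} +/- {control.std(ddof=1):.1f}")
print(f"mean +/- SD, treated: {treated.mean():.1f} +/- {treated.std(ddof=1):.1f}")

# GraphPad's "non-parametric unpaired t-test" ~ Mann-Whitney U (assumption)
u, p = stats.mannwhitneyu(control, treated, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}; significant at p <= 0.05: {p <= 0.05}")
```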
Characterization of AuNPs
Following the calculations discussed in Section 3.3 below, which showed that the highest energy of interaction between the nanoparticle and the virus surface could be expected for the particle size range of 5-15 nm, we chose particles of 10 nm (AuNPs I) and 16 nm (AuNPs II) for our study. We prepared samples of AuNP colloids that differed in size according to DLS measurements (on a "number" basis): AuNPs I had an average size of 10 nm, and AuNPs II of 16 nm (Figure 1). The AuNPs had a typical localized surface plasmon resonance band in the UV-Vis spectra with a maximum at 520 nm and carried a negative charge, with zeta potential values of −29 mV (AuNPs I) and −42 mV (AuNPs II).

For all the experiments below, AuNPs were dispersed in maintenance medium and used at concentrations of 0.295 and 5.9 µg/mL. We decided to test the AuNP antiviral activity at concentrations not higher than 5.9 µg/mL, to avoid potential cytotoxicity problems, and not lower than 0.295 µg/mL, because of the moderate antiviral activity of the AuNPs observed under the conditions studied.

Nanoparticle Cytotoxicity
To rule out the possibility that the reduction of infectivity was caused by the cellular toxicity of the AuNPs, monolayers of Vero cells were incubated with different concentrations (0.295 and 5.9 µg/mL) of each type of AuNP for different time periods. The MTT results revealed that AuNPs II and the stabilizer did not induce cell death after 48 h of incubation at concentrations of up to 5.9 and 62 µg/mL, respectively. Data are representative of three independent experiments, and values are expressed as mean ± SD. The smaller-sized AuNPs I (10 nm) exhibited stronger toxic effects than AuNPs II (16 nm) in the Vero cell cultures. In addition, the toxic effect depended on the concentration of the gold nanoparticles (Figure 2). The percentages of viable cells relative to the control cultures were ca. 71% (AuNPs I) and 99% (AuNPs II) at 0.295 µg/mL, and 58% and 93%, respectively, at 5.9 µg/mL. Cell viability remained close to 100% for sodium citrate used as the stabilizer.
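As a worked illustration of the linear-regression step behind the CC50 (and a quick arithmetic check on the working concentrations), the sketch below reuses the AuNPs I viability percentages reported above. With only two tested concentrations, the 50% crossing is an extrapolation, shown purely to demonstrate the method, not a reported result.

```python
# CC50 by linear regression of viability against log10(concentration),
# using the AuNPs I values reported above (71% at 0.295 ug/mL, 58% at 5.9 ug/mL).
import numpy as np

conc = np.array([0.295, 5.9])        # ug/mL, tested concentrations
viability = np.array([71.0, 58.0])   # % of untreated control

slope, intercept = np.polyfit(np.log10(conc), viability, 1)
log_cc50 = (50.0 - intercept) / slope
print(f"extrapolated CC50 ~ {10**log_cc50:.0f} ug/mL (outside the tested range)")

# Arithmetic check on the working concentrations: the stock colloid is
# C_Au = 1.5e-4 M; with M(Au) ~ 196.97 g/mol that is ~29.5 ug/mL of gold,
# so the 1:5 and 1:100 dilutions give ~5.9 and ~0.295 ug/mL, as in the text.
stock = 1.5e-4 * 196.97 * 1e3        # mol/L * g/mol -> g/L; *1e3 -> mg/L = ug/mL
print(f"stock: {stock:.1f} ug/mL; 1:5 -> {stock/5:.2f}; 1:100 -> {stock/100:.3f}")
```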
Nanoparticle Adsorption on the Virus
The first stage of interaction between the virus and the nanoparticles is their physical interaction via dispersion forces. Owing to fluctuations, an inhomogeneous distribution of electric charge (nanoparticle polarization) arises. This immediately induces an electric field at the opposite object (another nanoparticle in the case of the Van der Waals interaction between particles, or the virus in the studied case), which in turn causes an inhomogeneous distribution of electric charge in that particle. The interaction between the resulting dipole moments gives rise to the Van der Waals forces. The polarization fluctuations can be caused by thermal fluctuations, the electric field of vacuum fluctuations, an external electric field, and other factors [35].

Let us consider a spherical Au nanoparticle located near the HSV-1 virion. The virion has glycoprotein spikes on its surface [36], so its uneven surface must be taken into account. The virion has an icosahedral shape and a much larger size than the nanoparticle. These factors allow us to model the studied system as a spherical homogeneous nanoparticle located close to a nanostructured surface (Figure 3). Considering nanoparticle adsorption on the virus surface, we examined different locations of the nanoparticle (Figure 3, cases 1-3) and calculated the adsorption potential between the nanoparticle and the surface.
The interaction potential in the system is written in the form used in [35], as a function of d, the distance between the nanoparticle center and the top of the virus spike (see Figure 3c), and of F(P), the free energy of the system, which is described as in [37]. We need to find the ground state of the system, i.e., the state with the minimum energy. To find it, we used the Green's function method, which allows the local field E_i(P_i) to be expressed through the polarization of the nanoparticle, P_i, and the Green's function of the medium in which the nanoparticle is located. The free-energy expression can then be differentiated with respect to the nanoparticle polarization, which leads to the polarization corresponding to the ground state of the system, as in [38].

Thus, for the calculations we need the Green's function of the system in which the nanoparticle is located. In the studied case, the system comprises two half-spaces with an uneven interface. The Green's function of such a system can be found using the pseudo-vacuum Green's function method [39], as in [40]. It should be noted that the interface has many spikes, and accounting for all of them makes the calculations too complicated. It was shown that the effect of the spikes located far from the nanoparticle is much smaller than that of the closer ones, so it was omitted [40,41]. Hence, the modeling was constructed in the following stages:

1. Construction of a model of the virus surface with spikes, based on experimental studies of the virus structure.

2. Estimation of the "critical" distance between two spikes, based on the effective susceptibility concept and the pseudo-vacuum Green's function, as was done in [40,41]. In this case, we considered a curved surface with R >> r (R is the virus radius, r is the radius of the spike base) carrying cylindrical spikes.
These cylinders model the virus spikes. Here, the "critical" distance means that the term in the expression for the effective susceptibility of spike 0 caused by spike 1, located at this distance, is more than 100 times smaller than the other terms.

3. Simplification of the model by elimination of all the spikes located farther than the "critical" distance, to simplify the calculations. The adsorption potential of the nanoparticle in its different locations is then calculated using this model, as in [40].

For each of the three described cases, we studied the influence of the spikes. It was concluded that, similarly to the results in [40], for cases 1 and 3 only three spikes need to be considered: the central one and one on each side. For case 2, four spikes need to be considered: two on each side of the nanoparticle in one dimension.

For the gold nanoparticles, a core-shell spherical nanoparticle model was used. The core is gold, the material of the nanoparticles. The shell is the stabilizer, which may be described as a thin layer around the gold core; in the calculations we assumed the shell to be homogeneous. The dielectric constant of gold in the calculations was taken as −10.5 + 1.4i [42]. It should be noted that all interactions were considered in the presence of visible light, so values for the visible range were used. The nanoparticle shell is variable, as it is formed by the stabilizer molecules; its thickness depends strongly on the nanoparticle size, stabilizer concentration, and material. For trisodium citrate the shell thickness is around 0.4-0.7 nm [43]; for the calculations we used a shell thickness of 0.5 nm and a shell dielectric constant of 1.3, as in [44].

The virus was described as a spherical structure with a nonhomogeneous shell whose surface carries cylindrical spikes (mainly glycoproteins). The radius of the inner part of the virion in the calculations was 60 nm, and the whole shell thickness was taken as 20 nm: 10 nm for the homogeneous layer and 10 nm for the height of the cylinders [45]. As the dielectric properties of viruses are not known exactly, we chose those of DNA [46] for the inner part, and those of viral proteins and glycoproteins [47] for the shells.

The results of the calculations give the energy of interaction between the nanoparticle and the virus surface as a function of the distance between them. As shown in Figure 3c, the distance d was varied, while the relative position of the nanoparticle center and the edge of the spike was kept fixed. The minimum of the potential indicates physical adsorption of the nanoparticle on the virus surface, and the depth of the minimum gives the energy of adsorption. The case with the deepest energy minimum is the most energetically favorable state. Comparing the potentials for the different relative locations of the nanoparticle center and the spike edge shows that the deepest energy minimum and the closest equilibrium position correspond to case 1. Similar results were observed for all the nanoparticle sizes considered, although for the 20 nm nanoparticle these differences were less pronounced. Comparing the potentials for different nanoparticle sizes shows that the deepest minimum occurs for the smallest nanoparticles. Hence, it may be supposed that the antiviral effect is higher for smaller nanoparticles, which was indeed observed in our work.
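The Green's-function computation itself is not reproduced here, but the way the curves in Figure 4 are compared (locating each potential minimum and reading off its depth) can be illustrated with a toy potential. The 12-6 functional form and all parameters below are stand-ins chosen only to produce a single adsorption well per site, not the paper's model.

```python
# Toy illustration of how adsorption curves are compared: find the minimum
# of U(d) and its depth. The 12-6 well is a stand-in for the Green's-function
# result; the depths and positions are arbitrary illustrative numbers.
import numpy as np

def toy_potential(d, depth, d0):
    """Lennard-Jones-like well with minimum -depth at separation d0."""
    x = d0 / d
    return depth * (x**12 - 2.0 * x**6)

d = np.linspace(5.0, 40.0, 2000)   # nanoparticle-center-to-spike distance, nm
for label, depth, d0 in [("site 1", 1.0, 8.0), ("site 2", 0.6, 9.0), ("site 3", 0.4, 10.0)]:
    u = toy_potential(d, depth, d0)
    k = int(np.argmin(u))
    print(f"{label}: U_min = {u[k]:.2f} (normalized) at d = {d[k]:.1f} nm")
# The deepest well marks the most favorable adsorption site (case 1 above).
```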
From the results of the calculations of the adsorption potential for the different nanoparticle locations (Figure 4), it can also be stated that nanoparticle adsorption onto the virus spike is the most energy-efficient state. This means that the nanoparticle does not penetrate between the spikes and does not get closer to the virus envelope. Hence, based on the calculation results, it can be hypothesized that the nanoparticles adsorb on the virus surface uniformly, following its spikes. This may disturb viral attachment to the cellular receptors and prevent entry into the cell or fusion with its membrane.

The process of nanoparticle adsorption onto the HSV-1 virion was studied by cryo-TEM (Figure 5). Nanoparticles were seen to adsorb to the virus spikes, as described above, which indicates that this process was possibly caused by a dispersion interaction between the nanoparticle and the virus. However, experimental confirmation at the molecular level that nanoparticle adsorption to the virus spike is the most energy-efficient state is a challenge that was not addressed in this preliminary communication.

Figure 4. Adsorption potential for the different nanoparticle locations (curve 1 corresponds to site 1 in Figure 3; curve 2 to site 2; curve 3 to site 3). All the results were normalized to the deepest energy minimum, corresponding to case 1 for 5 nm nanoparticles.
Antiviral Effect of Nanoparticles
Different methods of treating the cell monolayers were used to assess the effect of AuNPs on the inhibition of HSV-1 infection: (1) a pretreatment assay, in which the different AuNPs and HSV-1 were added to the confluent monolayer of Vero cells during or after viral adsorption, and (2) a post-treatment assay, in which Vero cell monolayers were first infected with HSV-1 and AuNPs were then added to the inoculum at different times. For all treatments, cells infected with HSV-1 were then incubated for 48 h at 37 °C.

To test whether AuNPs affected HSV-1 infectivity in vitro, we used the pretreatment assay. The HSV-1 suspension was incubated with AuNPs for 0 min, 15 min, 1 h, or 4 h and then added to the Vero cell monolayers. A dose-dependent effect of the nanoparticles on the viral titers released into cell-free culture supernatants was observed. As shown in Table 1, incubation of virions with AuNPs I reduced virus replication in a dose-dependent manner. A higher level of inhibition was observed with nanoparticles at 5.9 µg/mL incubated with HSV-1 for 1 or 4 h than with nanoparticles at 0.295 µg/mL. Four-hour pretreatment of the virus with the smaller-sized nanoparticles achieved up to a 100-fold decrease in the HSV-1 load, while the bigger nanoparticles reduced the viral load at best twofold compared with the infection control. The viral loads in supernatants collected at 48 h p.i. were determined in confluent Vero cultures and calculated with the Reed-Muench method. AuNPs I pretreated with HSV-1 for 1 or 4 h showed the strongest antiviral effect at a concentration of 5.9 µg/mL (Table 1), causing up to a 100-fold inhibition of exogenous virus loads.

Because inhibition of viral infectivity could be a consequence of the action of AuNPs inside the cell at a post-entry event, we performed a post-treatment test by adding the nanoparticles at two concentrations 0-24 h after viral infection. Post-treatment did not have any measurable effect at the concentrations used, so the nanoparticles were not able to reduce the HSV-1 loads (data not shown).

Depending on the cell type, there are two main pathways of HSV entry into host cells, non-endocytic and endocytic, and viruses use a similar set of viral surface glycoproteins to enter host cells. HSV attaches through its envelope glycoproteins to receptors on the surface of the host cell. This interaction tightly anchors the virion to the plasma membrane of the host cell and eventually leads to membrane fusion and penetration of the virus into the host cell. Gold nanoparticles probably block the interaction of the virus with the cell, and this blocking might depend on the nanoparticle size. The size of the AuNPs may determine the host-pathogen interaction, and smaller nanoparticles may achieve higher binding efficiency.
It was found that smaller-sized silver nanoparticles (AgNPs) attach to HSV-1, preventing the virus from attaching to host cells and ultimately attenuating viral replication [48,49]. Halder et al. [27] synthesized gallic acid-stabilized monodispersed gold nanoparticles and found that they strongly inhibited HSV-1 infection in Vero cells. It is also possible that gold nanoparticles undergo a size-dependent interaction with HSV-1. Our results revealed that AuNPs were capable of controlling viral infectivity, most likely by blocking the interaction of the virus with the cell, in a manner that might depend on nanoparticle size. The smaller gold nanoparticles, 10 nm in size, had better antiviral activity, although they showed increased toxicity. NP toxicity strongly depends on physical and chemical properties, including size and shape. It was observed that different human cell types were more sensitive to small gold particles (1.4 nm) than to gold particles 15 nm in size [50]. It is possible that smaller-sized AuNPs attach more easily to the viral envelope, resulting in the reduction of HSV-1 replication. Electric-field interaction with unmodified gold nanoparticles, a "physical" interaction, could be the first step before tight "chemical" binding of AuNPs to the viral envelope through the spikes built from glycoproteins, namely through donor-acceptor or covalent bond formation between Au and peptide moieties.

Conclusions
1. No cytotoxicity of AuNPs II (16 nm) was observed in the Vero cell line up to a gold concentration of 5.9 µg/mL, while AuNPs I (10 nm) showed a greater cytotoxic effect.
2. Pretreatment, as well as post-treatment, of HSV-1 with AuNPs did not show a significant effect on Vero cell viability. The reduction in the CPE of HSV-1 after four-hour pretreatment of the virus with AuNPs II reached a maximum of 10%.
3. The smaller-sized nanoparticles were able to inhibit HSV-1 replication in the pretreatment assay, which showed that AuNPs reduced HSV-1 replication in a dose- and time-dependent manner.
4. It may be hypothesized that AuNP adsorption on the virion disturbs virus attachment to the cellular receptors and prevents entry into the cell or fusion with its membrane.
5. AuNP adsorption to the virion spikes can be explained by their Van der Waals interaction.

Outlook
Gold nanoparticles with an average size of 10 nm demonstrated inhibitory activity against HSV-1 in a dose- and time-dependent manner at non-cytotoxic concentrations.
DEVELOPMENT OF THE METHOD FOR JOINT OPERATION OF NEURAL-NETWORK TUNERS FOR CURRENT AND SPEED CIRCUITS

Introduction
Among the main consumers of energy resources in the metallurgical industry, rolling production, including the rolling mills, stands out. Improving the rolling process at crimping mills is one of the most important challenges; solving it would affect both the efficiency of overall production and the quality of rolled products. Even a slight reduction in the energy consumed by the electric drives of the rolling mills would reduce the cost of production. Here, automation is crucial for solving the energy-saving problem in the control systems of an automated electric drive.

The relevance of the present study is determined by the wide distribution of high-power direct current electric drives in rolling production; improving their energy efficiency even by 1-2% would bring a significant economic effect.

Literature review and problem statement
The subject of the present study is the system of control over the motion speed of a two-high reversing rolling mill. Roughing rolling stands belong to sophisticated nonlinear metallurgical aggregates whose parameters can change during operation [1,2]. The control systems of such units widely employ direct current electric drives and P- and PI-controllers with constant parameters. Such systems are built using the principles of subordinate regulation [3,4], according to which two control circuits are synthesized: an external circuit to control the rotation frequency of the electric motor and an internal circuit to control the armature current. Applying the specified linear controllers leads to a deterioration in the quality of transition processes when the operating conditions change [1,2]. Specifically, the parameters of the armature winding and of the mechanical part can change in the examined drive, which decreases the energy efficiency of the rolling stand's electric drive. This problem can be solved by adapting the parameters of the linear controllers in real time [5-8].
In paper [5], based on an analysis of existing methods for tuning linear controllers, the authors propose using methods of indirect adaptation as the most promising approach. However, conducting an identification procedure under industrial conditions is a difficult task. A similar idea has been developed further in other studies: in paper [6], the authors apply a stepwise test signal for the identification; in article [7], a signal based on one harmonic; and in [8], one based on two harmonics. The problem is the fact that …

At the same time, the authors have already designed neural-network tuners for current controllers [9] and speed controllers [10], which operate in real time and do not require a model of the control object. The effectiveness of the tuners was tested in situations where only one of them was functioning during the experiment. However, in order to improve the energy efficiency of the main electric drive of a rolling stand under industrial conditions, it is necessary to employ the neural-network tuners simultaneously in the speed control and armature current circuits. Direct application of the tuners results in a "conflict" in the logic of their rule bases. This occurs for the following reason: a change in the parameters of the armature winding (which must be compensated for by the current tuner) can, under certain conditions, degrade the transition processes in the speed circuit. Such a deterioration triggers the rule base of the speed circuit tuner and, consequently, tuning of the speed controller. The opposite situation is also possible: a change in the mechanical part of the electric drive can change the quality of work of the current circuit and trigger tuning of the respective controller.

The aim and objectives of the study
The aim of the present study is to devise a method for the joint operation of neural-network tuners in the armature speed and current control circuits in real time, by designing an appropriate algorithm that determines which controller requires tuning.

To accomplish this aim, the following tasks were set:
- to develop a rule base that accounts for the specificity of the joint operation of the current circuit and speed circuit tuners, and to represent its performance in the form of an algorithm;
- to test the effectiveness of the designed rule base within the framework of a model experiment on a system that includes a neural-network tuner of the speed circuit and a neural-network tuner of the current circuit for a direct current electric drive.
Description of a neural-network tuner
We shall consider the structure of the neural-network tuner. It combines a method of applying neural networks (NN) to correct the controller coefficients with a rule base of situations in which such tuning is required. This approach makes it possible to eliminate the shortcomings of the intelligent methods applied separately. Neural networks provide the capability of learning, and the rule base provides information on the specificity of the control object (permissible signal ranges, setpoints, the form of task changes, etc.). The learning rates of the NN neurons act as the consequents of the rules. From a functional point of view, the neural-network tuner is a combination of two interconnected elements: a neural network and a rule base.

We propose the following procedure for applying a neural-network tuner in the speed and armature current control circuits of the main electric drive of a rolling stand (Fig. 1). In accordance with the method proposed in paper [11], the tuner of the speed control circuit employs an artificial neural network with the structure 2-7-1, and the tuner of the current control circuit one with the structure 5-14-2. A description of the rule bases of the tuners is given in [9].

Development of the algorithm for the joint operation of tuners
In the process of joint operation of the rule bases of the examined tuners, conflicts may appear. We shall consider the rules that cause the "conflict" situations.

1. Rules for the current tuner:
- IF the first peaks in the curves of the current and the task are reached, AND the modulus of the task extremum equals the maximally permissible current, AND the current extremum exceeds the task extremum (by more than 3%), AND the current curve reached its extremum at a time when the task had deviated from its peak by more than 20% in amplitude, THEN it is necessary to increase K_P^curr.
- IF there was no decrease in K_P^curr during the given transition process, AND the first and second extrema of the current and the task are reached, AND the task extremum is less than the current maximum, AND the current extremum exceeds the task extremum by not more than 3%, AND the second current extremum is less than the second task extremum, THEN it is necessary to increase K_P^curr.

2. Rules for the speed tuner:
- IF the speed overshoot does not reach the required value chosen by the operator, AND the overshoot is greater than zero, THEN it is necessary to increase K_P^speed.
- IF the transition process is completed, AND the speed curve did not reach the task curve, THEN it is necessary to increase K_P^speed.

To distinguish between the cases of rule triggering, we designed the algorithm shown in Fig. 2 (a condensed sketch of this logic follows below). A call to the neural-network tuner of the speed controller is enabled only when two or more transition processes have taken place in the speed circuit without tuning of the current controller. The tuner of the current controller can increase the value of K_P^curr by the rules above only if the overshoot in the speed circuit falls within the valid range.
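The arbitration logic just described can be condensed into a short sketch. All names are ours, not the authors' code; the two guard conditions follow the text above: the speed tuner runs only after two or more transition processes without current-controller tuning, and the current tuner may raise K_P^curr only while the speed overshoot stays inside its permissible range.

```python
# Sketch of the joint-operation (arbitration) logic of Fig. 2.
# Names are illustrative; the two guard conditions follow the text above.

class TunerArbiter:
    def __init__(self, sigma_min, sigma_max):
        self.sigma_min = sigma_min     # permissible overshoot range, speed circuit
        self.sigma_max = sigma_max
        self.tp_without_cc_tuning = 0  # N in Fig. 2

    def on_transition_process(self, overshoot, cc_wants_kp_increase, speed_wants_tuning):
        """Decide which tuner may act after one transition process (TP)."""
        actions = []
        # The current tuner may raise K_P_curr only if the speed overshoot
        # lies inside the valid range [sigma_min; sigma_max].
        if cc_wants_kp_increase and self.sigma_min <= overshoot <= self.sigma_max:
            actions.append("tune current controller (raise K_P_curr)")
            self.tp_without_cc_tuning = 0
        else:
            self.tp_without_cc_tuning += 1
            # The speed tuner is called only after >= 2 TPs without CC tuning.
            if speed_wants_tuning and self.tp_without_cc_tuning >= 2:
                actions.append("tune speed controller")
        return actions

arbiter = TunerArbiter(sigma_min=0.0, sigma_max=10.0)
print(arbiter.on_transition_process(overshoot=12.0, cc_wants_kp_increase=True,
                                    speed_wants_tuning=True))   # [] (CC blocked, N = 1)
print(arbiter.on_transition_process(overshoot=12.0, cc_wants_kp_increase=False,
                                    speed_wants_tuning=True))   # speed tuner allowed (N = 2)
```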
Experimental testing of the developed algorithm on a model of the direct current electric drive
The experiments were performed on a model of the main electric drive of a two-high rolling stand, built in MATLAB Simulink. The model represents a two-circuit control system of a DC motor with separate excitation. The current and speed controllers are configured for the technical optimum (K_P^curr = 0.489; K_I^curr = 13.649; K_P^speed = 1.745). The armature winding model is an aperiodic first-order link with rated parameter values K_a^nom = 41.67 and T_e^nom = 0.036 s. The model of the mechanical part of the electric drive is an integrator with a rated integration time constant J^nom = 4,798 kg·m².

Control objects characterized by large moments of inertia (such as a rolling stand) require a smooth start. This rules out stepwise task changes because of the likelihood of impermissible surges and high dynamic loads. The most common way to reduce dynamic loads under industrial conditions is the application of setting devices, called intensity setting devices. Given the above, the task signal is implemented using an S-function. The speed task is the following sequence of setpoints: 0 rpm (0 V) → 60 rpm (4 V) → 0 rpm (0 V) → −60 rpm (−4 V). The neural-network tuners are also implemented in the form of S-functions. A more detailed description of the model is given in [9].

In the first experiment (Fig. 3), we simulated a change in the parameters of the electric motor armature winding. K_a and T_e were changed in the range of 80-120% of their rated values (Fig. 3, f, g). In line with the designed algorithm, only the current controller was tuned (Fig. 3, d, e), and the rule base of the speed tuner was not used. In the second experiment (Fig. 4), the opposite situation was simulated: the moment of inertia of the mechanical part of the electric drive, J, was changed in the range of 50-150% of the rated value (Fig. 4, i). During this experiment, only the speed circuit was tuned (Fig. 4, c); the rule base of the current circuit tuner was not called. Based on the results of the experiments, we can conclude that the designed algorithm made it possible to eliminate the "conflict" situations between the current and speed tuners.

(Figs. 3, 4 panel key: e - coefficient of the integral part of the current controller, f - gain factor of the armature winding, g - time constant of the armature winding, i - moment of inertia of the electric drive.)

Fig. 5 shows the results of the experiment in which we simultaneously changed the parameters of the motor's armature winding (Fig. 5, f, g) and the moment of inertia of the mechanical part of the electric drive (Fig. 5, i). The joint application made it possible to reduce the energy consumed by the electric drive in the course of the experiment by 1.9% compared with the system without tuning. The main advantage of the joint application of neural-network tuners in the speed and current control circuits is the simultaneous accounting for changes in the parameters of the armature link of the motor and of the mechanical part of the electric drive. Within the framework of the present study, we developed a rule base that makes it possible to maintain the order of work of the tuners.
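For reference, the plant described at the start of this section can be rendered as a minimal discrete-time simulation. This is a sketch under stated assumptions, not a reproduction of the Simulink model: Euler integration, a unit torque constant, neglected back-EMF and thyristor-converter lag, J read as 4.798 kg·m² (taking the comma as a decimal separator), and a step speed task instead of the S-shaped intensity setter.

```python
# Minimal discrete-time sketch of the two-loop DC drive model described above.
# Assumptions beyond the text: Euler integration, torque constant c = 1 N*m/A,
# back-EMF and converter lag neglected, J = 4.798 kg*m^2, step speed task.
import numpy as np

KP_CURR, KI_CURR, KP_SPEED = 0.489, 13.649, 1.745  # technical-optimum settings
KA, TE, J, C = 41.67, 0.036, 4.798, 1.0            # plant parameters

dt, t_end = 1e-4, 10.0
w_ref = 60.0 * 2.0 * np.pi / 60.0                  # 60 rpm task in rad/s

w, i, ui = 0.0, 0.0, 0.0                           # speed, current, PI integrator
for _ in range(int(t_end / dt)):
    i_ref = KP_SPEED * (w_ref - w)                 # P speed controller
    e_i = i_ref - i
    ui += KI_CURR * e_i * dt                       # integral part of the PI current controller
    u = KP_CURR * e_i + ui
    i += dt * (KA * u - i) / TE                    # first-order (aperiodic) armature link
    w += dt * C * i / J                            # integrating mechanical part

# Under these assumptions the speed approaches the task with a time constant
# of roughly J / (C * KP_SPEED) ~ 2.7 s.
print(f"speed after {t_end:.0f} s: {w:.2f} rad/s (task {w_ref:.2f} rad/s)")
```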
The results obtained can be explained in the following way: we revealed two pairs of conflicting rules (Section 5). In order to avoid their simultaneous triggering, an algorithm was developed that establishes priorities when calling the tuners. Tuning of the current circuit controller is primary, and only if that tuner has not been called over several transition processes is there the possibility to call the tuner of the speed circuit.

However, it cannot be argued that this method is universal, as it was not validated for the excitation current and EMF control circuits, or for more complex electric drive control systems. This is, in fact, the main shortcoming of the proposed method.

The result of the present study enables the simultaneous application of neural-network tuners in the speed and current control circuits. This makes it possible to estimate the character of changes in the parameters of the control object more accurately. The application of the tuners could improve the quality of regulation in the DC electric drive control system, which in turn would improve the energy efficiency of the entire unit.

A limitation on the use of the proposed method is the need to correct the values of the time delays in the work of the neural-network tuners for each particular electric drive control system. In addition, the operation of the neural-network tuner in the speed circuit requires that the task in that circuit take a stepwise form, or a linearly growing form with a restriction.

Our study is a continuation of a larger study into the development and application of neural-network tuners in the control systems of a direct current electric drive [9,10]. The aim of further research is to apply a neural-network tuner to more complex electric drive control systems. Alternating current electric drive control systems are much more complicated than the one considered here: they contain a large number of interrelated control loops and can utilize relay and hysteresis controllers, switching tables, etc. All this makes the task of applying a neural-network tuner much more difficult, especially in terms of eliminating contradictions in the rule bases of such tuners.

Conclusions
1. We analyzed the rule bases of the neural-network tuners for the current and speed circuit controllers for the existence of rules that can be triggered simultaneously. A rule base for the joint operation of the tuners was developed, enabling the rules depending on the quality of the transition processes in the current and speed circuits; it differs from existing ones in that situations involving the simultaneous calling of conflicting rules are excluded. One of these rules is located in the base of the current circuit tuner, and the other in the base of the speed circuit tuner.
2. The experiments were performed on a mathematical model of the main electric drive of a rolling stand under changes in the parameters of the armature winding and the mechanical part of the drive, both in separate experiments and together in one. The control system with two neural-network tuners improved the energy efficiency of the electric drive by 1.9% compared with the system without tuning. Given the high power of the examined drive, even such a reduction in energy consumption would lead to a substantial economic effect and could bring down the cost of the rolled products.
Fig. 1. Functional diagram of the application of a neural-network tuner: SC - speed controller, CC - current controller, TC - thyristor converter, AW - armature winding, M - mechanical part.

Fig. 2. Algorithm of joint operation of two neural-network tuners: N - number of transition processes (TP) in the speed circuit during which the tuning of CC was not performed; σ - overshoot in the speed circuit; [σ_min; σ_max] - permissible range of values for the overshoot in the speed circuit.

Fig. 3, Fig. 4. Results of the first experiment (change of the armature-winding parameters) and of the second experiment (change of the moment of inertia); panel labels as in Fig. 5.

Fig. 5. Experiment with a drift in the electric drive's moment of inertia and parameters of the armature winding: a - rotation speed, b - armature current, c - coefficient of proportional part of the speed controller, d - coefficient of proportional part of the current controller, e - coefficient of integral part of the current controller, f - gain factor of armature winding, g - time constant of armature winding, i - moment of inertia of the electric drive.
Mechanism of Follicular Helper T Cell Differentiation Regulated by Transcription Factors

Helping B cells and antibody responses is a major function of CD4+ T helper cells. Follicular helper T (Tfh) cells are identified as a subset of CD4+ T helper cells that is specialized in helping B cells in the germinal center reaction. Tfh cells express high levels of CXCR5, PD-1, IL-21, and other characteristic markers. Accumulating evidence has demonstrated that the dysregulation of Tfh cells is involved in infectious, inflammatory, and autoimmune diseases, including lymphocytic choriomeningitis virus (LCMV) infection, inflammatory bowel disease (IBD), systemic lupus erythematosus (SLE), rheumatoid arthritis (RA), IgG4-related disease (IgG4-RD), Sjögren syndrome (SS), and type 1 diabetes (T1D). Activation of subset-specific transcription factors is the essential step in Tfh cell differentiation. The differentiation of Tfh cells is regulated by a complicated network of transcription factors, including positive factors (Bcl6, ATF-3, Batf, IRF4, c-Maf, and so on) and negative factors (Blimp-1, STAT5, IRF8, Bach2, and so on). The current knowledge of the molecular mechanisms of Tfh cell differentiation at the transcriptional level is summarized in this paper, which will provide many perspectives for exploring the pathogenesis and treatment of the relevant immune diseases.

Introduction

CD4+ helper T cells play a critical role in forming and amplifying the abilities of the immune system. Follicular helper T (Tfh) cells are identified as a subset of CD4+ T helper cells that provides help to B cells for the formation and maintenance of the germinal center (GC), the production of high-affinity class-switched antibodies, long-lived plasma cells, and memory B cells [1]. A great deal of research on Tfh cells has been carried out in the past 10 years; in particular, the differentiation and function of Tfh cells have been studied in a range of settings including infectious diseases, vaccines, autoimmune diseases, and allergies. Tfh cells are characterized by high expression of the chemokine receptor CXCR5, the transcription factor Bcl6, the costimulatory molecule ICOS, and the coinhibitory molecule PD-1. Once naïve CD4+ T cells are activated by antigen-presenting cells (APCs) together with IL-6 and IL-21, they will differentiate into Tfh cells. A multiple-stage process is involved in the generation of Tfh cells from naïve CD4+ T cells, which consists of initiation, maintenance, and full polarization stages [1]. During the initiation phase of Tfh cell differentiation, multiple signals take part in the process, including transcription factors (Bcl6, Ascl2, Batf, IRF4, c-Maf, and so on), the costimulatory molecule ICOS, and cytokines (IL-6/IL-21); in particular, higher TCR affinity is necessary for the initiation of Tfh cell (Bcl6+CXCR5+) differentiation at the phase of dendritic cell priming [2-7]. Then, Bcl6+CXCR5+ Tfh precursor cells move into the T-B border zone, where they receive further differentiation signals from activated B cells [8]. After this encounter, the reinforced expression of Bcl6 regulates surface markers, which accelerates the migration of Tfh cells into the GC, where they provide helper signals to B cells [9, 10] (Figure 1). Differentiation of naïve CD4+ T cells into Tfh cells is modulated by a complex transcriptional network (Figure 2). Multiple transcription factors that either support or oppose the differentiation and function of Tfh cells have been identified (Table 1).
The current knowledge of the transcriptional mechanisms underlying Tfh cell differentiation is comprehensively described in this paper, and possible future directions are highlighted.

Bcl6 and Blimp-1

Bcl6 has been known as a key transcription factor for Tfh cell development, acting through pathways essentially independent of Blimp-1 [3, 39]. Bcl6 consists of a zinc finger domain; a bric-a-brac, tramtrack, broad-complex (BTB) domain; and a middle domain [40]. The Bcl6 DNA-binding zinc finger domain is essential for Bcl6 activity in CD4+ T cells [8]. The BTB domain of Bcl6 participates in the correct differentiation of Tfh cells, most likely by interacting with the Bcl6-interacting corepressor (BCOR) [41]. Disruption of the middle domain of Bcl6 prevents its association with the corepressor metastasis-associated protein 3 (MTA3) and inhibits the differentiation and function of Tfh cells by de-repressing Prdm1 (which encodes Blimp-1) and other crucial target genes [42].

Bcl6 expression is induced by IL-6-STAT1/STAT3 signaling [43], and it is driven very early after T cell activation in a CD28-dependent manner [44]. The E3 ubiquitin ligase Itch is essential for Bcl6 expression at the early stage of Tfh cell development [45]. Deficiency of the Wiskott-Aldrich syndrome protein suppresses Bcl6 transcription, which results in a deficient Tfh cell response [46]. Research shows that Bcl6 inhibits the IL-7R/STAT5 axis during Tfh cell generation [47]. Bcl6 mediates the effect of activating transcription factor 3 (ATF-3) on Tfh cells in the gut [16]. ATF-3 is a stress-inducible transcription factor and plays a critical role in the prevention of colitis by regulating the development of Tfh cells in the gut. In addition, Bcl6 also suppresses the expression of specific microRNAs that are thought to control the differentiation of Tfh cells, such as miR-17-92 [9] and miR-31 [48]. miR-17-92 inhibits CXCR5 expression, and miR-31 directly binds to the Bcl6 promoter.

Blimp-1 has been found to be a critical transcriptional repressor of Tfh cell differentiation. Blimp-1 has an inhibitory effect on Bcl6 expression, indicating that Bcl6 and Blimp-1 are antagonistic regulators of the differentiation of Tfh cells. Blimp-1 is induced by IL-2/STAT5 signaling, and it suppresses the expression of Tfh-associated genes including Bcl6, c-Maf, Batf, CXCR5, and IL-21 [25, 26]. Blimp-1-deficient CD4+ T cells in mice show enhanced Tfh cell differentiation and GC formation [3, 49]. Taken together, these results indicate that Bcl6 is both necessary and sufficient for Tfh cell development and the proper differentiation of Tfh cells in vivo, and that the differentiation of Tfh cells requires keeping the expression balance between Bcl6 and Blimp-1.

Bcl6 and Blimp-1 are associated with various infectious and autoimmune diseases through their regulation of Tfh cells. Bcl6 is highly expressed in the sinus, parotid gland, and lacrimal gland tissues of IgG4-related disease (IgG4-RD) patients [14]. Blimp-1 in peripheral blood is upregulated in patients with IgG4-RD [14]. Compared with healthy controls, higher Bcl6 expression and lower Blimp-1 expression in peripheral blood are observed in patients with rheumatoid arthritis (RA) [15].

c-Maf and Batf

c-Maf and Batf are members of the activator protein 1 (AP-1) family. c-Maf is a bZIP transcription factor and promotes the differentiation of Tfh cells [6]. It is highly expressed in Th17 cells and mature Tfh cells.
The selective loss of c-Maf expression in Tfh cells results in downregulated expression of Bcl6, CXCR5, PD-1, and IL-21 [6]. In addition, one study reveals that Bcl6 and c-Maf synergistically orchestrate the expression of Tfh cell-associated genes (PD-1, ICOS, CXCR5, and so on) [4]. Batf is known to control class-switched antibody responses. Batf is highly expressed in Tfh cells and is essential for the differentiation of Tfh cells through regulating the expression of Bcl6 and c-Maf [50, 51]. Batf directly binds to and activates the conserved noncoding sequence 2 (CNS2) region in the IL-4 locus and then triggers the production of IL-4 in Tfh cells [52].

Both c-Maf and Batf are related to immune diseases. Compared with healthy controls, the c-Maf mRNA expression level and the percentage of Tfh cells in peripheral blood mononuclear cells (PBMCs) are increased in patients with chronic immune thrombocytopenia (cITP), and they decrease after effective treatment [12]. Compared with healthy controls, Batf in the submandibular glands and affected lymph nodes is markedly increased in patients with IgG4-RD [17].

IRF4 and IRF8

IRF4 and IRF8 belong to the evolutionarily conserved IRF family. IRF4 is expressed in hematopoietic cells and plays pivotal roles in the immune response. It has been acknowledged that the IRF4 locus "senses" the intensity of TCR signaling to determine the expression level of IRF4 [18]. IRF4 plays a critical role in regulating the generation of Tfh cells. In IRF4−/− mice, CD4+ T cells in lymph nodes and Peyer's patches fail to express Bcl6 and other Tfh-related molecules [53]. IL-21 is a key cytokine for the development of Tfh cells [54], and IRF4 regulates the production of IL-21 [55]. Therefore, IL-21 takes part in the IRF4-dependent regulation of Tfh cell differentiation. In wild-type mice, IRF4 can interact with Batf-JUN family protein complexes to form a heterotrimer that can bind to AP-1-IRF4 complexes and regulate Tfh cell differentiation [50, 51].

IRF8 plays various important regulatory roles in the growth, differentiation, and function of immune cells in inflammatory bowel disease (IBD) patients [19]. IRF8 inhibits the differentiation of Tfh cells by directly binding to the promoter region of the IRF4 gene and inhibiting the transcription and activation of IRF4. In contrast, IRF8 deficiency significantly enhances IRF4 binding to the promoter region of the IL-21 gene and results in the expansion of Tfh cell differentiation in vitro and in vivo [19].

STATs

Members of the STAT family, including STAT1, STAT3, STAT4, and STAT5, are important regulators of the generation of Tfh cells [43, 54]. STAT1 is necessary for IL-6-mediated Bcl6 induction during the early differentiation of Tfh cells [43]. STAT3 has been found to be critical for Tfh cell development in a Bcl6-dependent manner [23]. The major STAT3-stimulating cytokines include IL-6, IL-21, IL-12, IL-10, and TGF-β [23, 56, 57]. Besides, STAT3 regulates Bcl6 expression by cooperating with the Ikaros zinc finger transcription factors Aiolos and Ikaros [58]. TRAF6 inhibits the activation of type I interferon-STAT3 signaling [59].
The latest research clearly shows that T-bet, although mildly inhibiting early Tfh cell differentiation, mainly plays a crucial and specific supporting role in the Tfh cell response by promoting cell proliferation and intervening in apoptosis at the end-stage effector phase of acute viral challenge [2]. T-bet and STAT4 are coexpressed with Bcl6 to coordinate the production of IL-21 and IFN-γ by Tfh cells and promote the GC response [24].

STAT5 has been shown to be an inhibitory factor for the differentiation of Tfh cells. Molecular analyses reveal that activation of IL-2/STAT5 signaling enhances the expression of Blimp-1 and prevents the binding of STAT3 to the Bcl6 locus [25], resulting in a decrease of the GC and long-lived antibody responses [26]. Similarly, IL-7-dependent activation of STAT5 contributes to Bcl6 repression [60]. The latest research shows that IL-10 suppresses the differentiation of Tfh cells in humans and mice by promoting STAT5 phosphorylation [61]. He et al. [62] demonstrate that the secreted protein extracellular matrix protein 1, induced by IL-6 and IL-21 in Tfh cells, promotes the differentiation of Tfh cells by downregulating the level of STAT5 phosphorylation and upregulating Bcl6 expression.

T-bet and the STATs are important regulators of Tfh cell development in infectious and autoimmune diseases. STAT1 serine-727 phosphorylation (designated STAT1-pS727) plays an important role in promoting Tfh cell responses, leading to systemic lupus erythematosus (SLE)-associated autoantibody production [21]. Compared with healthy controls, the expression levels of pSTAT1, pSTAT4, and T-bet in PBMCs are upregulated in patients with SLE [22]. The expression level of pSTAT3 in PBMCs in patients with RA is higher than that in healthy controls [23].

TCF-1 and LEF-1

TCF-1 is expressed in both developing and mature T cells and is essential for initiating and securing the differentiation of Tfh cells [7, 63]. TCF-1 directly binds to the Bcl6 transcription start site and the Prdm1 5′ regulatory regions, which promotes the expression of Bcl6 and represses the expression of Blimp-1 during acute viral infection [7, 28, 29]. TCF-1 works synergistically with LEF-1 to promote the early differentiation of Tfh cells by the multipronged approach of maintaining the expression of IL-6Rα and gp130, enhancing the expression of ICOS, and promoting the expression of Bcl6 [30].

TOX2

The high-mobility group (HMG)-box transcription factor TOX2 is selectively expressed in human Tfh cells and is regulated by Bcl6 and STAT3 in the initial stage of Tfh cell generation [31]. There is a feed-forward loop centering on TOX2 and Bcl6, which drives Tfh cell development. TOX2 promotes Bcl6 expression by inhibiting IL-2 and/or enhancing IL-6 signaling during Tfh cell development. Furthermore, TOX2 binds to sites shared by Batf and IRF4, which suggests that TOX2, Batf, and IRF4 may functionally converge in developing Tfh cells.

Ascl2

Ascl2, a basic helix-loop-helix domain-containing transcription factor, is highly expressed in Tfh cells, and its expression may precede Bcl6 expression. The expression of Ascl2 in the spleen is upregulated in Sjögren syndrome (SS) model mice compared with control mice [32].
Ascl2 initiates the differentiation of Tfh cells by upregulating CXCR5 and downregulating C-C chemokine receptor 7 (CCR7) expression, as well as the IL-2 level, in T cells in vitro. IκBNS is highly expressed in Tfh cells and is essential for Ascl2-induced CXCR5 expression during the differentiation of Tfh cells [64]. After activation of the Tfh-related signals described above, Ascl2 accelerates T cell migration into the follicles in mice [5]. Acute deletion of Ascl2, as well as inhibition of its function with the Id3 protein, can result in impaired Tfh cell development and GC responses [5]. In addition, epigenetic regulation, such as histone modifications, also coordinately controls the differentiation and function of Tfh cells along with transcription factors. The Ascl2 locus is marked with the active chromatin marker trimethylated histone H3 lysine 4 (H3K4me3) in Tfh cells, and other transcription factors including Bcl6, Maf, Batf, and IRF4 are uniformly associated with H3K4me3 [65].

Bach2

Bach2 is a negative regulator of Tfh cell differentiation. Bach2 directly represses the expression of Bcl6 by inhibiting Bcl6 promoter activity [34] and negatively regulates CXCR5 expression [34]. Overexpression of Bach2 in Tfh cells inhibits the expression of Bcl6, IL-21, and the coinhibitory receptor TIGIT [34]. The deletion of Bach2 leads to the upregulation of CXCR5 expression and contributes to preferential Tfh cell differentiation [33].

FOXO1 and FOXP1

FOXO1 has been found to negatively regulate the differentiation of Tfh cells through ICOS-mTORC2-FOXO1 signaling in the early stage of differentiation [66]. FOXO1 regulates the differentiation of Tfh cells by negatively regulating Bcl6. The E3 ubiquitin ligase Itch is essential for the differentiation of Tfh cells: Itch associates with FOXO1, promotes its ubiquitination and degradation [45], and thereby positively regulates the differentiation of Tfh cells. FOXP1 negatively regulates the expression of CTLA-4 and IL-21 in activated CD4+ T cells [36]. Naïve CD4+ T cells deficient in FOXP1 preferentially differentiate into Tfh cells, which results in substantially enhanced GC and antibody responses [67]. In addition, FOXP1-deficient Tfh cells restore the generation of high-affinity antibodies when cocultured with high numbers of single-clone B cells [36].

KLF2

The transcription factor KLF2 serves to inhibit Tfh cell generation by downregulating sphingosine-1-phosphate receptor 1 (S1PR1). KLF2 deficiency in activated CD4+ T cells contributes to Tfh cell generation, whereas KLF2 overexpression prevents Tfh cell production. KLF2 also induces the expression of Blimp-1 and thereby inhibits the differentiation of Tfh cells [38]. ICOS maintains the phenotype of Tfh cells by downregulating KLF2. KLF2 has been identified as a target of miRNA92a in inducing human Tfh precursor cells, and the miRNA92a-mediated Tfh precursor induction is regulated by PTEN-PI3K-KLF2 signaling [37].

Conclusions

Multiple transcription factors have been found to regulate Tfh cell generation. In this paper, the regulatory mechanisms of transcription factors in Tfh cell differentiation are summarized. However, many questions remain to be further investigated. (i) Are there other Tfh-specific transcription factors beyond the abovementioned factors? (ii) How do Tfh-specific transcription factors impact epigenetic mechanisms during the induction of Tfh cell generation? (iii) What are the factors' stage-specific requirements?
(iv) What are the molecular mechanisms contributing to Tfh cell maintenance and memory formation?

As summarized in this review, Tfh cell-related transcription factors including Bcl6, IRF4, STAT1/STAT4/STAT5, T-bet, TCF-1, LEF-1, TOX2, Bach2, FOXP1, and KLF2 are all involved in viral infection. Both Bcl6 and STAT3 play an important role in RA. The expression levels of Bcl6, STAT1, STAT4 and T-bet are upregulated in SLE patients. Bcl6, Blimp-1, and Batf are associated with IgG4-RD. Given the association of Tfh cells with a broad spectrum of diseases, subsequent in-depth investigation of the regulatory factors for the differentiation of Tfh cells may provide potential therapeutic targets for various immune diseases, especially viral infections, SLE, RA, and IgG4-RD.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Application of Duval Pentagon in State Diagnosis of On-Load Tap Changer

A comparative analysis of the effectiveness of two DGA methods, the Duval Pentagon and the Duval Triangle, in identifying abnormal OLTCs was carried out. The area of the Duval Pentagon corresponding to the normal state of the OLTC was determined, and the Duval Pentagon was used to judge the state of the OLTCs of UHV converter transformers. The results show that the Duval Pentagon can identify the normal and abnormal states of on-load tap changers: more than 95.5% of normal on-load tap changers fall in the low-energy discharge area labeled D1. The identification results for 245 on-load tap changer states show that the accuracy of the Duval Pentagon for the normal state is higher than that of the Duval Triangle, so the Pentagon can be used as an auxiliary criterion for on-load tap changer state diagnosis.

Introduction

With the rapid development of China's national economy, the demand for electricity has gradually increased, and the construction of UHV AC and DC projects has been continuously strengthened, providing strong support for the national energy development strategy of "West-to-East Power Transmission, North-South Interconnection, and National Interconnection" [1]. As the most critical equipment in a UHV DC converter station, the converter transformer is responsible for the power interconnection of the AC and DC high-voltage electrical equipment at the node of the converter station. The start and stop of the converter transformer play a decisive role in the continuity of energy delivery and the reliability of regional energy security. Therefore, the detection, evaluation and diagnosis of its status, with timely detection of problems in operation and timely elimination of defects, are of great significance to the safe and stable operation of the regional power grid and the entire power system [2].

In order to keep the converter valve group in a state of low thermal power consumption, UHV DC transmission usually adopts fixed-angle phase-shift modulation, and the voltage regulation mode is on-load voltage regulation; therefore, the converter transformer is equipped with an on-load tap changer (On-Load Tap-Changer, OLTC) [3]. The abnormal cases of UHV converter transformers in recent years show that, compared with other electrical components, the failure rate of the on-load tap changer is higher [4], because, compared with other stationary components, the mechanical parts of the on-load tap changer are frequently in operation, its electrical parts are usually in an arcing state, and its insulating medium is usually seriously polluted [5]. At the same time, a failure of the on-load tap changer may cause damage to the entire transformer, causing economic losses several times the value of the tap changer. Therefore, condition monitoring of on-load tap changers is of positive significance for improving the reliability of UHV converter transformers.

The detection and monitoring methods for the on-load tap changer include the dynamic resistance test, dissolved gas analysis in oil, mechanical vibration detection and motor current detection [6]. Among them, the dynamic resistance test can find serious and critical faults and supports state evaluation, but it must be performed with the equipment de-energized; mechanical vibration detection and motor current detection can only detect the state of the mechanical switching process, and their monitoring coverage is not comprehensive [7].
In contrast, dissolved gas analysis in oil covers all stages of on-load tap changer status monitoring and has a fault warning function.

Dissolved gas analysis (DGA) is a widely used technology for diagnosing the insulation status of oil-filled equipment [8]. The insulating oil in an OLTC is cracked by switching operations and by electrical and overheating faults [9]. The gases produced by the cracking of mineral oil include hydrogen (H2), methane (CH4), ethane (C2H6), ethylene (C2H4) and acetylene (C2H2); damage to the solid insulation produces carbon monoxide (CO) and carbon dioxide (CO2). These gases dissolve in the insulating oil of the tap changer. DGA judgment methods analyze the concentrations of the above cracking gases and comprehensively judge the fault type according to the actual magnitudes, proportional relationships, and trends of the gas concentrations.

At present, the DGA-based methods for diagnosing the status of transformers, bushings and OLTCs are mainly the IEEE/IEC standard [10], the Duval triangle method [11], the improved three-ratio method [12], the Rogers ratio method, etc. [13]. In addition, fault diagnosis methods based on classification algorithms such as BP neural networks [14], support vector machines [15] and extreme learning machines [16] have appeared, which provide a reference for OLTC fault diagnosis to a certain extent. However, because on-load tap changers switch frequently and abnormalities such as arcing occur often, the analysis of dissolved gases in their oil is more complicated than for power transformers. Although the traditional methods can identify abnormal defects to a certain extent, in field applications the number of fault samples is small compared with the normal-state samples, so classification methods such as BP neural networks and support vector machines can lead to misdiagnosis. In the end, methods such as the Duval triangle are still needed for further verification, or a visual inspection after disassembly of the equipment is required.

To this end, this paper proposes to use the Duval pentagon method for OLTC fault diagnosis. This method is based on the traditional Duval triangle method, fully considers the percentage contents of five common gas components, and is supported by the diagnosis results of a large number of power transformer OLTCs. This paper uses the method to diagnose the OLTC status of converter transformers, aiming to provide a technical reference for the diagnosis of UHV converter transformer OLTCs.

DGA judgment methods

DGA judgment methods are divided into the characteristic gas limit and ratio method, the Duval triangle method and the Duval pentagon method. Among them, the characteristic gas limits and ratios are the prerequisites for starting a DGA analysis [9-11]. An on-load tap changer also produces characteristic gases during normal operation; the contents and ratios of the characteristic gases can be compared horizontally to delineate suspected components. The characteristic gas ratio method used for on-load tap changers has a high accuracy in identifying normal and faulty components, which can greatly reduce the scope of the analysis; however, the ratio method cannot determine the fault type or the faulty component.
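As a concrete illustration of the limit-and-ratio screening introduced above (the specific ethylene limit and the C2H4/C2H2 criterion are detailed in the next section), a minimal sketch is given below. All thresholds are assumptions for illustration: the ethylene warning limit is hypothetical, and the ratio boundaries loosely follow the values used later in this paper, where ratios below 0.5 accompany normal samples and ratios above 1.8 accompanied the confirmed faults.

```python
# Hedged sketch of characteristic-gas screening for an OLTC.
# Gas concentrations are in uL/L; all thresholds are illustrative only.

def screen_oltc(c2h4, c2h2, c2h4_warning_limit=150.0):
    """Return a coarse OLTC state flag from the ethylene content and
    the C2H4/C2H2 ratio (the criterion described in the text)."""
    if c2h2 <= 0:
        return "insufficient data"     # ratio undefined
    ratio = c2h4 / c2h2
    if c2h4 > c2h4_warning_limit and ratio > 1.8:
        return "alarm"                 # over-limit gas and a high ratio
    if ratio > 1.8:
        return "attention"             # the ratio alone is suspicious
    if ratio < 0.5:
        return "normal"                # typical arcing-dominated oil
    return "attention"

# Example: rising ethylene with a growing ratio, as in Table 1
print(screen_oltc(c2h4=620.0, c2h2=230.0))   # -> 'alarm'
```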
The Duval triangle method builds on the gas ratio method and adds the relationship between gas content and fault type; the Duval pentagon method adds hydrogen and ethane on the basis of the Duval triangle method and identifies the fault type with a high degree of confidence.

Characteristic gas limit and ratio method

Compared with transformers, OLTCs operate frequently to meet voltage regulation needs. Mechanical vibration and even arcing occur during contact between the moving and static contacts, whereas transformers are almost "stationary" except for slight vibrations caused by electromagnetic forces. Therefore, the insulating oil inside the OLTC is more susceptible to cracking and degradation [12], and more ethylene and acetylene are dissolved in the insulating oil of the on-load tap changer than in that of a transformer. The characteristic gas limit method applicable to OLTCs uses ethylene as the over-limit warning gas, and the gas ratio method uses the ratio C2H4/C2H2 as the state criterion [13]; the gas content and gas ratio together determine whether the on-load tap changer enters the attention state or even the alarm state.

Table 1 shows the gas data collected from an OLTC from 2003 to 2006. As the operating time increases, the C2H4 content increases significantly, and the ratio of C2H4 to C2H2 also rises rapidly. After core-lifting inspection, severe ablation of the moving contact of this OLTC was found.

Duval Triangle Method

The Duval triangle method has been widely used in DGA-based diagnosis of insulating oil. The method uses three hydrocarbon gases: C2H2, C2H4 and CH4. The volume percentage of each gas is first divided by the sum of the three gas volume percentages to calculate the proportion of each gas; the Duval triangle is then drawn, and the proportions of the three gases are used as coordinates to plot the ratio point in the triangle [14-16]. The area in which the ratio point falls indicates the type of fault in the equipment. According to the fault types corresponding to each area of the Duval triangle listed in Table 2, the fault can be judged intuitively.

Duval pentagon method

In order to solve the problem that the Duval triangle method cannot identify equipment with abnormal hydrogen growth, two gases, H2 and C2H6, are added on the basis of the Duval triangle method to improve the recognition sensitivity for natural deterioration of insulating oil, partial discharge and low-temperature overheating [17]. The Duval pentagon method uses the relative percentages of the five gases H2, CH4, C2H6, C2H4 and C2H2 as coordinates on five axes to draw a pentagon, and then finds the center point of that pentagon through a mathematical calculation; the area into which the center point falls indicates the corresponding fault type, as shown in Figure 2.
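Before the formal step-by-step procedure given next, the following minimal Python sketch illustrates the pentagon construction and the centroid calculation. It assumes the standard Duval Pentagon axis ordering (H2 at 90°, then C2H6, CH4, C2H4 and C2H2 counterclockwise at 72° intervals); the fault-region boundaries of Figure 2 are not reproduced, and all names are illustrative.

```python
import math

# Sketch of the Duval Pentagon centroid calculation (steps 1-5 in the
# text). Axis ordering follows the standard Duval Pentagon: H2 at 90
# degrees, then C2H6, CH4, C2H4, C2H2 counterclockwise at 72-degree
# steps. The region boundaries (D1, D2, T1-T3, S, PD) are omitted.

AXIS_ANGLES_DEG = {"H2": 90, "C2H6": 162, "CH4": 234,
                   "C2H4": 306, "C2H2": 18}

def pentagon_centroid(ppm):
    """ppm: dict of the five gas concentrations (any common unit).
    Returns (cx, cy), the centroid of the polygon whose vertices are
    the relative gas percentages plotted on the five axes."""
    total = sum(ppm[g] for g in AXIS_ANGLES_DEG)
    # Step 1: relative content (percent) of each gas
    pct = {g: 100.0 * ppm[g] / total for g in AXIS_ANGLES_DEG}
    # Step 2: vertex coordinates on the five axes (counterclockwise)
    order = ["H2", "C2H6", "CH4", "C2H4", "C2H2"]
    verts = []
    for g in order:
        a = math.radians(AXIS_ANGLES_DEG[g])
        verts.append((pct[g] * math.cos(a), pct[g] * math.sin(a)))
    # Steps 3-4: signed polygon area, then centroid (shoelace formulas)
    area, cx, cy = 0.0, 0.0, 0.0
    for i in range(5):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % 5]
        cross = x0 * y1 - x1 * y0
        area += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area *= 0.5
    return cx / (6.0 * area), cy / (6.0 * area)

# Example with arbitrary concentrations (uL/L):
print(pentagon_centroid({"H2": 50, "CH4": 120, "C2H6": 30,
                         "C2H4": 60, "C2H2": 10}))
```

The centroid returned here is then classified by the region (fault type) into which it falls, which is what Figure 2 and Table 3 describe.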
The calculation process of the pentagon is as follows [15]:
1) calculate the relative contents (percentages) of the five gases: %H2, %CH4, %C2H2, %C2H4 and %C2H6;
2) calculate, according to formulas (1) and (2), the vertex coordinates of the pentagon whose vertices are the relative contents of the five gases;
3) calculate, according to formula (3), the area of this pentagon;
4) calculate, according to formula (4), the coordinates of the center of this pentagon;
5) plot the center of the pentagon on the pentagon plane according to the center coordinates.

In the pentagon shown in Figure 2 there are six areas in total, among which low-energy discharge and the normal state share one area; the fault types represented by each area are listed in Table 3.

Comparison of Duval triangle and Duval pentagon

In order to compare the effectiveness of the Duval triangle method and the Duval pentagon method for OLTC status diagnosis, 700 sets of dissolved gas data from the insulating oil of on-load tap changers in the operating state were selected as the analysis object and plotted in the triangle and the pentagon, respectively; the results are shown in Figures 3 and 4. The 700 samples are divided into two clusters in the triangle: one cluster is located in the normal area labeled N, the other in the abnormal area labeled T3. The 700 samples are likewise divided into two clusters in the pentagon: one cluster is located in the area labeled D1, the other in the area labeled T3. This result shows that the pentagon, like the triangle, can separate and identify the dissolved gas samples.

The above 700 sets of data were screened according to the level of dissolved gas content; the screening methods are divided into total screening and incremental screening. Total screening means selecting the measured value of the dissolved gas volume percentage as the data object, so the screening result is the measured value itself. Incremental screening means selecting the difference between two nearby dissolved gas measurements as the data object, so the screening result is the difference (increment). The measured value of dissolved gas is relatively large and has a small effect on the gas ratios, so it is little affected by low-content gases, but its sensitivity to incipient minor faults is very low. The increment of dissolved gas is small and has a larger effect on the gas ratios, so it is more affected by low-content gases; however, incremental analysis greatly reduces the influence of historical values and is highly sensitive to incipient minor faults. Since the denominator of the gas ratios in the triangle and pentagon methods is the sum of part or all of the combustible gases, it is little affected by low-content gases, and changes in dissolved gas can be used to identify defects more accurately. In this paper, incremental screening is selected.

3.1 Analysis of incremental data samples

245 groups of data objects were selected in which the dissolved gas increment was greater than the minimum reliable value (10 μL/L) and both ethylene and acetylene had increased (>1 μL/L). Among them, 20 groups of data objects were confirmed to be faulty after inspection; these were selected as typical samples and classified as the abnormal group.
The remaining 225 groups of data objects, with C2H4/C2H2 < 0.5, are summarized as the normal group. Finally, the triangle coordinates and the pentagon coordinates of the normal group and the abnormal group were calculated and plotted in the triangle and the pentagon, as shown in Figure 5.

In the triangle, 138 of the normal-group objects enter the normal area labeled N. In the abnormal group, half of the objects (10 cases) enter the T3 high-temperature overheating area, and nearly half (8 cases) enter the X3 high-energy discharge area. The remaining 2 cases enter the N zone because their three-phase adjustment times differ considerably while there is no contact ablation or damage to the transition resistance. This result shows that when the DGA data are incremental data, the triangle method can identify the fault and normal states of the on-load tap changer.

In the pentagon, the distribution of the normal-group objects is relatively concentrated: 212 objects fall into the low-energy discharge (normal) area labeled D1, and 11 objects fall into the high-energy discharge area labeled D2. The objects in the abnormal group are more widely distributed: 4 objects fall in the low-energy discharge (normal) area D1, 5 objects fall into the high-energy discharge area D2, and 9 objects fall in the T3 high-temperature overheating area. This result shows that when the DGA data are incremental data, the pentagon can also identify the fault and normal states of the on-load tap changer. The accuracies of the Duval triangle and Duval pentagon methods with incremental data are given in Tables 4 and 5.

In the analysis using incremental data, the pentagon D1 area is selected as the normal area; the correct rate of the pentagon in determining the normal state of a component is 95.5%. In contrast, the correct rate of the triangle in determining the normal state is 61.3%. The correct rate of the pentagon in judging high-temperature overheating is 75%, and that of the triangle is 83.3%.

Analysis of total data samples

The total-amount data corresponding to the 225 normal groups and the 20 abnormal groups were selected; the triangle and pentagon coordinates of the two groups were calculated and plotted in the triangle and the pentagon. The results are shown in Figure 6. According to the statistical results in Tables 6 and 7, the normal objects fall into the N and D1 regions of the triangle, of which 174 are in the N region and 47 in the D1 region; the correct rate of the triangle in identifying the normal state is 77.3%. Except for two abnormal objects, the remaining abnormal objects fall in the X3 and T3 areas, and the accuracy of the triangle in identifying the abnormal state is 90%. In the pentagon, the normal objects fall in the D1 area; the identified number is 227 against an actual number of 225, so the correct rate for the normal state is close to 100%. The abnormal objects fall into the D2 and T3 areas, with two cases of frequently operated tap changers entering D1; the correct rate for the abnormal state is 85%.
This shows that, when analyzing total data, the pentagon method identifies the on-load tap changer status more accurately than the triangle method.

The above comparison results show that the pentagon can also be used to analyze and judge the dissolved gases in the insulating oil of the on-load tap changer. The characteristic points of normal tap changers are located in the low-energy discharge area labeled D1, so the D1 area is marked as the normal area. In the analysis using total data, the correct rate of the pentagon in judging the normal state of a component is close to 100%, while that of the triangle is 77.3%; the correct rate of the pentagon in judging high-temperature overheating is 100%, and its correct rate for high-energy discharge is 62.5%. The accuracy of the triangle for judging high-temperature overheating is 83.3%, and its accuracy for judging high-energy discharge is 100%.

Application of Duval Pentagon Method in OLTC Diagnosis of Converter Transformer

In order to further verify the effectiveness of the Duval pentagon method in diagnosing OLTC status, a DGA analysis was carried out on the on-load tap changers of the converter transformers of a UHV converter station to determine whether the tap changers are abnormal; the data are given in Table 8. The 12 on-load tap changers had been in operation for just one year, so the total amount of dissolved gas in the oil is equal to the increment. The characteristic points of each on-load tap changer were plotted in the triangle and in the pentagon, respectively; the results are shown in Figure 7.

The dissolved gas data and the corresponding ratios of the selected 12 tap changers are shown in Table 8. The C2H4/C2H2 values of the 4 abnormal tap changers are all greater than 1.8. Endoscopic inspection found severe ablation of the internal moving contacts of 3 tap changers and slight creepage at the metal screw of the tap changer switching core; the manufacturer replaced these tap changers, and no abnormality was found in the remaining one. The results of the Duval pentagon method were checked against the characteristic gas ratio method; they agree with it but differ from the results of the Duval triangle method. The actual inspection proves that the Duval pentagon method is superior to the Duval triangle analysis method in diagnosing the on-load tap changer status. According to the zoning results of the pentagon method, 7 of the 12 on-load tap changers are in the low-energy discharge zone labeled D1, and the remaining 5 are in the zone labeled S.
Conclusion

This paper used the Duval pentagon analysis method to analyze dissolved gas samples from the insulating oil of on-load tap changers, compared its judgment performance with the Duval triangle analysis method, and applied it to UHV converter transformers to diagnose the state of the on-load tap changer. The specific conclusions are as follows:

(1) The analysis of 700 sets of operating data shows that the Duval pentagon method can cluster on-load tap changers using the dissolved gases in oil, and the D1 area of the pentagon can be regarded as the normal area of the OLTC state.

(2) The analysis of 245 sets of typical sample data shows that, with either total or incremental data, the Duval pentagon method can effectively distinguish the normal and abnormal states of the OLTC, and it has a higher accuracy rate than the Duval triangle method.

(3) The practical application to 12 UHV converter transformer on-load tap changers shows that the Duval pentagon method is accurate and effective in diagnosing the OLTC state.
Direct Digital Frequency Synthesizer Designs in MATLAB

This study presents the structure of Direct Digital Frequency Synthesizers (DDFSs), which have several advantages over conventional synthesizers, such as high frequency resolution, fast switching speed and low power dissipation. In order to lessen the physical area and power dissipation, ROM compression techniques are applied in the designs. The Bipartite Table Method (BTM) and the Multipartite Table Method (MTM) are utilized in this study because they provide high compression rates. By using MTM, compression rates of 157.54:1, 726.71:1 and 3463.29:1 are obtained at 58.40 dB, 75.30 dB and 84.66 dB SFDR levels, respectively.

Introduction

Direct digital frequency synthesis is one of the most popular techniques for synthesizing frequencies in systems requiring specific frequencies, fast switching, low power dissipation and a small silicon area. There are several methods, using table based or iterative approaches, to implement Direct Digital Frequency Synthesizers (DDFSs). In addition to the natural advantages of these approaches, ROM compression techniques are applied to reduce the ROM size. Furthermore, the approaches are enhanced with modifications offering a trade-off between Spurious Free Dynamic Range (SFDR), switching speed and the used silicon area.

DDFSs are commonly used in several areas such as the defense industry, satellite systems, radars, test and measuring equipment, etc. As distinct from analog or indirect synthesizers, DDFSs provide high frequency resolution, fast switching speed, continuous-phase switching, a small physical area and low power dissipation [1]. For the last 10 years, work on frequency synthesizers has focused on minimizing the used area and power dissipation while keeping the spectral purity above an acceptable level [2], [3] and [4]. High frequency synthesizers are also studied with some recently offered approaches [5], [6] and [7].

A DDFS consists of three sub-blocks: the phase accumulator, the phase to amplitude converter and the digital to analog converter. Figure 1 illustrates the principal parts of a DDFS.

Phase Accumulator

The phase accumulator is used in a DDFS to obtain an adjustable output frequency. It is controlled by an N-bit Frequency Tuning Word (FTW). Several frequencies can be obtained by using this digital control word with one clock source. The phase accumulator works as an N-bit counter, creating a digital phase wheel. Each of the 2^N points on this wheel corresponds to the amplitude value of the related phase. The increment of the counter is determined by the FTW. Figure 2 illustrates the digital phase wheel idea. The frequency of the output signal is f_out = (FTW × f_clk) / 2^N, where f_clk is the clock frequency.

Phase to Amplitude Converter (PAC)

There are two main approaches to phase to amplitude conversion in a DDFS: the amplitude value corresponding to the related phase can be obtained by using Look-Up Tables (LUTs) or by iterative calculations. While table based methods allow operation at higher frequencies, iterative methods provide better spectral purity. Many studies have been done on both methods over the last decade [2], [9], [11] and [12]. Work on table based methods has generally aimed to reduce the ROM size and to increase the Spurious Free Dynamic Range (SFDR), which indicates the spectral purity of the generated sinusoid. On the other hand, COordinate Rotation DIgital Computer (CORDIC) based iterative methods have been proposed to achieve better SFDR levels.
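To illustrate the accumulator and the table based phase to amplitude conversion described above (including the quadrant compression idea discussed later), below is a minimal behavioral sketch in Python. It is not this paper's MATLAB design: the word lengths and table size are arbitrary assumptions.

```python
import math

# Behavioral sketch of a DDFS: an N-bit phase accumulator stepped by
# the frequency tuning word (FTW), followed by a quarter-wave sine LUT
# (the quadrant-compression idea: the two MSBs select the quadrant,
# the next bits address a 0..pi/2 table). Word lengths are arbitrary.

N = 16                      # phase accumulator width
LUT_ADDR = 8                # address bits of the quarter-wave table
LUT = [math.sin(math.pi / 2 * i / (2 ** LUT_ADDR))
       for i in range(2 ** LUT_ADDR)]

def ddfs_samples(ftw, n_samples):
    acc = 0
    for _ in range(n_samples):
        acc = (acc + ftw) & ((1 << N) - 1)       # modulo-2^N accumulator
        quadrant = acc >> (N - 2)                # two MSBs
        idx = (acc >> (N - 2 - LUT_ADDR)) & ((1 << LUT_ADDR) - 1)
        if quadrant in (1, 3):                   # falling quarter: mirror
            idx = (1 << LUT_ADDR) - 1 - idx
        s = LUT[idx]
        yield s if quadrant < 2 else -s          # lower half: negate

# f_out = FTW * f_clk / 2^N; e.g. FTW = 1024 gives 64 samples per period
wave = list(ddfs_samples(ftw=1024, n_samples=64))
```

The quarter-wave table is what quadrant compression provides for free: the same 2^LUT_ADDR entries serve all four quadrants through mirroring and sign inversion.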
Digital to Analog Converter (DAC)

DACs are used to convert the digital data taken from the PAC into analog signals. The DAC is one of the most important parts of the DDFS and is directly related to its performance: the resolution and sampling frequency of the DAC can determine the limits of the DDFS. While delta-sigma and R-2R type DACs are beneficial when high resolution is required, current-steering DACs have sampling rates up to several GSPS with comparatively lower resolution. The lower resolution comes with higher quantization error, and this causes a decrease in SFDR, although oversampling may lessen the decrease a little. In addition to all this, the AC and DC characteristics of the DAC are also important and need to be considered at the implementation stage [8].

CORDIC Based DDFSs

CORDIC is a structure that was proposed by Volder in 1959 to calculate basic trigonometric functions [10]. CORDIC based algorithms use the idea of two-dimensional vector rotation. The vector rotation idea is shown in Figure 3, and the related trigonometric equations are

x' = x·cos θ − y·sin θ,  y' = y·cos θ + x·sin θ.  (2.1), (2.2)

Factoring out cos θ gives

x' = cos θ·(x − y·tan θ),  y' = cos θ·(y + x·tan θ).  (2.3)

As mentioned earlier, CORDIC is applied as an iterative method to calculate trigonometric functions. The angle of rotation θ is completed after T sub-rotations and is evaluated as in Eq. (2.4):

θ = Σ (t = 0..T−1) δ_t·arctan(2^(−t)),  (2.4)

where δ_t is the direction of the t-th rotation. tan θ_t is chosen as 2^(−t) so that Eq. (2.3) is easily applied digitally, since the multiplications reduce to bit shifts, and the equations can be rearranged as x_(t+1) = x_t − δ_t·y_t·2^(−t), y_(t+1) = y_t + δ_t·x_t·2^(−t). The gain constant is generally compensated through the initial end point: P(G, 0) is used instead of P(1, 0) as the initial end point of the vector, where G = 1 / ∏ (t = 0..T−1) √(1 + 2^(−2t)) [11].

Quadrant compression exploits the symmetric structure of the sinusoid. Instead of a LUT that stores the sine values over 0–2π, a LUT that stores the sine values over 0–π/2 is used. The most significant two bits of the P-bit phase word determine the quadrant of the phase wheel, and the rest carry the phase information within the quadrant. The block diagram of the technique is given in Figure 4. As distinct from the plain piecewise linear approach, the x axis is divided into 2^b larger intervals (b < a), and the same slope value is used for the adjacent pieces within each larger interval. Thus, the ROM size is efficiently decreased. Figure 5 shows the piecewise linear approach used in BTM.

Table Based DDFSs

In BTM, there are two tables that store the values required for the interpolation. The y_i values are stored in the Table of Initial Values (TIV). The study of De Caro and his colleagues shows that the number of TOs is not proportional to the SFDR. The study also indicates that the SFDR of the DDFS strongly depends on the x, y, z numbers. From this point of view, it is obvious that an optimization is essential to get the best results. If the number of TOs is more than two, the optimization requires more complex algorithms and calculations. Moreover, a higher number of TOs does not ensure a smaller physical area in comparison with two or three TOs for SFDR levels less than 90 dBc [9]. Thus, this paper focuses on the table based methods with one or two TOs.

MATLAB Results

There are three main objectives while designing a DDFS: maximizing the SFDR, minimizing the ROM size and increasing the maximum operating frequency as much as possible. In this paper, the SFDR and ROM size considerations are investigated.

BTM Design

As mentioned earlier, BTM uses two different tables to store the sine amplitude information: one of them is for the initial values, the other for the offset values. The size of the TIV is calculated as in Eq. (3.1), where R is the amplitude resolution and α is the number of bits that represent the TIV values.
The TO values are represented with β + γ bits, and the TO size is calculated as in Eq. (3.2). The sum of the TO size and the TIV size gives the total ROM size. The compression ratio is evaluated as in Eq. (3.3). The P-bit phase information taken from the accumulator includes the α and γ bit fields; the ROM size depends on these bits, with some β bits inside the α bits. Figure 6 shows the decomposition of the phase word.

The BTM design results are given in Table 3.1 and allow two significant deductions. Firstly, it is obvious that increasing the length of the phase word does not by itself improve the SFDR, while it makes the ROM size bigger; so, determining the best decomposition of the phase word is of great importance. Secondly, increasing the amplitude resolution provides better SFDR but causes a notable increment in the ROM size.

MTM Design

MTM is originally based on BTM and requires a smaller ROM size compared to it, with a negligible decrease in SFDR. It is generally preferred when higher SFDR is required, because BTM provides the same SFDR only with a very high ROM size. In this part of the paper, some MTM design results are investigated and compared with BTM. The TIV size calculation in MTM is similar to that in BTM; using more than one TO makes the difference in the total ROM size. The phase word decomposition is given in Figure 7, where θ_i = β_i + γ_i. The phase word length and the amplitude resolution determine the SFDR and the ROM size. Some design results are given in Table 3.2. As clearly seen from the table, it is possible to reach a better SFDR just by rearranging the decomposition of the phase word. The results indicate that minimizing the α value allows making the total ROM size smaller. It is also clear that increasing the P and R values causes a remarkable rise in SFDR. While closer α, θ_i and θ_(i+1) values provide the best compression, it is more complicated to state any condition for the best SFDR. Although some algorithms have been proposed to find out the best decomposition of the phase word for a specific SFDR level, there is no multi-objective algorithm to optimize both the SFDR and the ROM size. In this paper, no such algorithm is used, and the results given in the tables show the effect of the parameters.

Conclusions

The DDFS structure has been investigated in this study. The phase accumulator, phase to amplitude converter and digital to analog converter blocks of the structure were described in Section I. Then, CORDIC, BTM and MTM based DDFSs were explained in Section II, and the design results were presented and discussed in Section III. The main problem of the table based methods is that they need much more physical area. When advanced ROM compression techniques such as MTM are applied to minimize the area, a complex optimization algorithm is required at higher SFDR levels. There are some beneficial optimization algorithms to minimize the ROM size at some target SFDR levels, but multi-objective optimization techniques may be considered as a future work to optimize the sampling frequency, ROM size and SFDR together.
The association of enzymatic and non-enzymatic antioxidant defense parameters with inflammatory markers in patients with exudative form of age-related macular degeneration

There is evidence that oxidative stress and inflammation are involved in the pathogenesis of age-related macular degeneration (AMD). The aim of this study was to analyze the antioxidant defense parameters and inflammatory markers in patients with the exudative form of AMD (eAMD), their mutual correlations and their association with specific forms of AMD. The cross-sectional study included 75 patients with eAMD, 31 patients with the early form, and 87 age-matched control subjects. Significantly lower SOD, TAS and albumin values and higher GR, CRP and IL-6 were found in eAMD compared to the early form (p<0.05). Significant negative correlations were found between GPx and fibrinogen (r = -0.254) and between TAS and IL-6 (r = -0.999), and positive correlations between uric acid and CRP (r = 0.292) and between IL-6 and uric acid (r = 0.398), in eAMD. A significant association of CRP (OR: 1.16, 95% CI: 1.03-1.32, p = 0.018), fibrinogen (OR: 2.21, 95% CI: 1.14-4.85, p = 0.021), TAS (OR: 7.45, 95% CI: 3.97-14.35, p = 0.0001), albumin (OR: 1.25, 95% CI: 1.11-1.41, p = 0.0001) and uric acid (OR: 1.006, 95% CI: 1.00-1.02, p = 0.003) was found with eAMD. In conclusion, it may be suggested that there is a significant impairment of antioxidant and inflammatory parameter levels in eAMD patients. In addition, a significant association exists between the tested inflammatory markers and antioxidant parameters and late eAMD.

Introduction

Age-related macular degeneration (AMD) is the leading cause of irreversible central visual loss among the elderly in the developed countries. The prevalence of the early form of AMD is 18% in the population of 65 to 74 years of age, rising to 30% after 74 years of age. (1) Aging is associated with biological changes in the eye, including cumulative oxidative injury. Free radicals are constantly synthesized and involved in a series of toxic effects such as lipid peroxidation and oxidative modification of proteins and DNA. (1) Vision loss in AMD occurs through photoreceptor damage in the macula, with abnormalities in the retinal pigment epithelium (RPE) and Bruch's membrane. (2) Excessive exposure to light is associated with age-related macular degeneration. (3) In vivo, lipofuscin granules in the RPE are continually exposed to visible light (400-700 nm) and high oxygen tensions (70 mmHg), ideal conditions for the formation of reactive oxygen species with the potential to damage cellular proteins and lipid membranes. It has been hypothesized that photosensitization reactions may be involved in the development of AMD, via the synthesis of reactive oxygen species such as superoxide, hydrogen peroxide and singlet oxygen, which may damage the RPE and Bruch's membrane. (4) The different types of oxidants produced in cells require that cells have different types of antioxidant defense parameters. (5,6) The major cellular antioxidants are enzymatic substances such as superoxide dismutase (SOD), Se-dependent glutathione peroxidase (GPx), catalase (CAT) and glutathione reductase (GR), and various non-enzymatic low-molecular substances such as glutathione, retinoids, carotenoids, ascorbic acid, vitamin E, albumin, uric acid, bilirubin, transferrin and ceruloplasmin, which act as scavengers for different types of reactive oxygen species.
Aging is associated with a higher frequency of several disorders, including atherosclerosis, peripheral vascular disease, coronary artery disease, type 2 diabetes mellitus, dementia, Alzheimer's disease, etc. (7,8) Aging is also characterized by a proinflammatory state that contributes to the onset of disability in age-related diseases. Over the last two decades, a prominent role of inflammation in the pathogenesis of AMD has been established. Lipid accumulation in RPE cells combined with oxidative stress over time results in the formation of lipid peroxidation products such as 4-hydroxynonenal (4-HNE), malondialdehyde (MDA), isoprostanes (F2-IsoPs), acrolein, hexanoyl-lysine, and oxidatively modified low-density lipoprotein (ox-LDL), which have cytotoxic and pro-inflammatory effects on RPE cells. (9)

The aim of this study was to analyze the activities of the antioxidant enzymes SOD, GR and GPx, along with the non-enzymatic low-molecular substances (albumin, uric acid and bilirubin) and the Total Antioxidant Status (TAS) of serum, as well as the acute inflammatory markers (CRP, IL-6 and fibrinogen), in patients with AMD. In addition, the aim was to correlate these parameters in relation to the different stages of AMD, defined as the early and the exudative-advanced form of AMD, in order to find the possible impact of the tested parameters on the development of specific forms of the disease.

Materials and Methods

Patients. In the cross-sectional study, conducted at the Clinic of Ophthalmology, University of Belgrade, 106 patients with age-related macular degeneration, with a mean age of 71.3 ± 7.04 years, and 87 age-matched control subjects comprising the control group (CG) were included. The patients underwent a complete ophthalmological examination including visual acuity assessment, color fundus photography and fluorescein angiography. They were thoroughly clinically examined and completed a questionnaire about their habits, including BMI, physical activity, smoking, etc.

One of the pathological hallmarks of AMD is the focal deposition of extracellular material between the retinal pigment epithelium (RPE) and Bruch's membrane, called drusen, visualized as yellow deposits under the retinal pigmented epithelium. The Age-Related Eye Disease Study (AREDS) defined categories based on the exam findings of drusen, atrophy, and neovascularization; these categories are: a) no AMD (fewer than 5 small drusen, <63 μm), b) mild AMD (multiple small drusen or some intermediate-sized drusen, 63-124 μm), c) intermediate AMD (extensive intermediate-sized drusen, more than one large drusen, >125 μm, or noncentral geographic atrophy), and d) advanced AMD, with two subcategories: central geographic atrophy (also known as "advanced-dry" AMD) and choroidal neovascularization (the creation of new blood vessels in the choroid layer, causing vision loss), also known as "wet" or advanced-exudative AMD. (10) This classification was based on the most severely affected eye. Our categories of interest were those with advanced-exudative AMD and the "early" form of AMD (mild and intermediate). The AMD patients were not receiving any anti-VEGF therapy.
The exclusion criteria for patients and control subjects were: the presence of any other ocular disease such as glaucoma, cataract, chronic uveitis, or intra- or extraocular tumors, or the presence of a systemic disease such as rheumatoid arthritis, cardiovascular disease, thyroid disease, inflammatory bowel disease, spondyloarthropathy, synovitis, tuberculosis or malignant tumors. The subjects in the control group were recruited from the employees of the Institute of Ophthalmology, CCS, Belgrade, and their relatives, who were without any signs of acute conditions or maculopathy at the time of the study. All subjects gave their informed consent to participation in the study, and the local Ethics Committee approved this study. This study was performed according to the Declaration of Helsinki. Methods. The blood samples for analysis were taken after 12-14 h of overnight fasting. All laboratory tests were done immediately. The antioxidant parameters SOD, GPx, GR and TAS were determined by commercial tests (Randox Ltd., UK) based on spectrophotometric methods, according to Goldstein for SOD, Paglia for GPx, Miller for TAS, and Goldberg for GR. (11)(12)(13)(14) SOD was determined in blood hemolysate, which was obtained by washing the erythrocytes 4 times with 3 ml of 154 mmol/L NaCl and finally by lysing the washed erythrocytes with cold deionized water and leaving them for 15 min at +4°C to complete the hemolysis. GPx was determined in the whole blood sample, which was, just before determination, diluted 41 times by gradual addition of diluent (supplied in the test kit) and double-concentrated Drabkin's reagent. TAS and GR were determined in plasma that was obtained by centrifugation of Li-heparinized blood for 10 min at 3,000 rpm. The analytical accuracy and precision were tested according to the manufacturer's protocol, using the control materials provided by the manufacturer. The within-run imprecision (CV%) was 4.7% for SOD, 4.5% for GPx, 3.2% for GR and 2.3% for TAS, and the between-run imprecision was 5.9% for SOD, 7.3% for GPx, 5.2% for GR and 4% for TAS. CRP was determined by an immunochemical high-sensitivity (hsCRP) method using the Olympus AU 400 analyzer, while fibrinogen was determined by the Clauss method on a Behring Coagulation System XP analyzer. IL-6 was determined by a chemiluminescent method on an Access (Beckman-Coulter) immunochemical analyzer. CRP and IL-6 were measured in serum, while fibrinogen was determined in citrated plasma. Albumin, uric acid and bilirubin concentrations were determined in serum, using standard laboratory methods, on the Olympus AU 400 biochemical analyzer. Statistical analysis. Statistical analysis was performed with the MedCalc ver. 9.4.2.0 statistical package using Student's t test, the Mann-Whitney U test, the Chi-square test, ANOVA and the Kruskal-Wallis test. Results were presented as mean ± SD for continuous normally distributed variables, and as median and interquartile range for non-normally distributed data. Spearman's rank and Pearson's correlation tests were used to define correlations of the individual parameters between and within the tested groups. All statistical tests were two-tailed. P values ≤0.05 were considered statistically significant. Linear regression and logistic regression analyses were used to model the association of the antioxidant and inflammatory markers with the advanced and early forms of AMD. Results The values of the tested parameters and general information about the subjects are presented in Table 1.
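The odds ratios and confidence intervals reported in this study were obtained with MedCalc; as a minimal illustration of the same logistic-regression step, the following Python sketch (using statsmodels) shows how such ORs and 95% CIs are typically derived from model coefficients. The file name, the column names, and the outcome coding are hypothetical, not taken from the study's dataset.

```python
# Hypothetical sketch of the logistic-regression step described above. The paper
# used MedCalc; this reproduces the same kind of OR / 95% CI computation in
# Python with statsmodels. Column names (crp, fibrinogen, tas, albumin,
# uric_acid, eamd) are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("amd_cohort.csv")  # assumed file: one row per subject

predictors = ["crp", "fibrinogen", "tas", "albumin", "uric_acid"]
X = sm.add_constant(df[predictors])   # add the intercept term
y = df["eamd"]                        # 1 = exudative AMD, 0 = early form

model = sm.Logit(y, X).fit(disp=False)

# Exponentiated coefficients give odds ratios; exponentiated confidence
# bounds give the corresponding 95% CIs.
ors = np.exp(model.params)
ci = np.exp(model.conf_int())
summary = pd.DataFrame({"OR": ors, "CI 2.5%": ci[0], "CI 97.5%": ci[1],
                        "p": model.pvalues})
print(summary.round(3))
```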
According to the AREDS classification, out of the total number of AMD patients, 75 had the advanced exudative form of the disease, choroidal neovascularization (late AMD); of these, 53 patients had only one eye affected and 22 patients had both eyes affected. The remaining 31 patients had the early form of AMD. Of the total number of studied AMD patients, 73.4% were female and 26.6% were male. Lower SOD activity (p = 0.043) and TAS concentration (p = 0.0004), and higher GR activity (p = 0.04), were found in the AMD patients with the exudative form of the disease compared to the early form of AMD (Table 1). Significantly higher CRP and IL-6 values were found in the same group of patients (p<0.05) compared to the early form. The fibrinogen values were elevated in both tested AMD groups compared to the controls (p = 0.007), but the average values were very similar in the early and advanced AMD groups (p>0.05). The values of the non-enzymatic antioxidant parameters differed significantly between the tested groups. Significantly lower albumin values (p<0.001) were obtained in AMD patients compared to the controls, in both the early (p = 0.004) and the exudative form of AMD (p = 0.05). Lower albumin values were recorded in the exudative AMD group compared to the early form (p = 0.04). The uric acid values were also significantly lower in both tested subgroups compared to the control group (p = 0.044 and p = 0.048, respectively). No significant difference in uric acid values was obtained between the two subgroups of AMD patients (p>0.05). No significant difference in bilirubin values was obtained between the two tested subgroups, nor in comparison to the control group. A negative correlation was recorded between albumin and CRP (ρ = -0.339, p = 0.022), and between bilirubin and CRP at the borderline value of significance (ρ = -0.185, p = 0.05), in the whole group of AMD patients. It is important to mention that a significant, weak negative correlation was obtained between SOD and the age of patients in the whole AMD group (r = -0.285, p = 0.012) (Fig. 5), while a stronger correlation was found in the subgroup of exudative AMD (r = -0.405, p = 0.005). A very strong negative correlation was recorded between GPx and age in the exudative form of AMD (r = -0.844, p = 0.017), while GR correlated positively with the subjects' age in the same group of AMD patients (r = 0.844, p = 0.039). Using logistic regression analysis, a significant association was obtained between the occurrence of advanced-exudative AMD and the values of CRP (OR: 1.16, 95% CI: 1.03–1.32), fibrinogen (OR: 2.21, 95% CI: 1.14–4.85), TAS (OR: 7.45, 95% CI: 3.97–14.35), albumin (OR: 1.25, 95% CI: 1.11–1.41) and uric acid (OR: 1.006, 95% CI: 1.00–1.02) (Table 2). Discussion The obtained results support the hypothesis that AMD patients have a significant impairment of the antioxidant defense system and inflammatory response in comparison to the control group. This study has documented that there is a significant difference in the values of antioxidant defense parameters between early and advanced-late AMD. The late form of AMD is associated with choroidal neovascularization, a greater deficiency of SOD, TAS, albumin and uric acid, and a higher degree of inflammation. The parameters of systemic acute inflammation, CRP and IL-6, were higher in the late form of AMD relative to the early form. The large number of correlations obtained between the studied parameters indicates a close relationship between them, their mutual influence, and their connection with the pathogenic mechanisms underlying this disease.
There was a synergistic relationship between the enzymatic antioxidants GPx and SOD in all tested subgroups, as well as an inverse correlation between enzymatic and non-enzymatic antioxidants (GPx and TAS, SOD and TAS, SOD and uric acid). The majority of the examined antioxidants were negatively correlated with the inflammatory markers (GPx and fibrinogen, TAS and IL-6, SOD and TAS, GPx and TAS, albumin and IL-6), indicating that the increase of inflammation was followed by a decreasing activity of antioxidant enzymes, except for GR and uric acid, whose values increased with the degree of inflammation. An increase of the IL-6 value occurs with aging, which is documented in many studies (a correlation found only in the control group) and is followed by a lower activity of GPx. (15,16) We also found a mutual positive correlation between the inflammatory markers in both groups (AMD and CG) (GR and fibrinogen), suggesting that these changes were associated with aging and not related to AMD. Increased inflammation and a reduced antioxidant defense system are also consequences of aging, but chronic inflammation, a change in their usual relationship, and the appearance of some new correlations could be a consequence of the disease (AMD). Oxidative stress and oxidative damage play a significant role in several ocular diseases, including age-related macular degeneration, cataract, uveitis, corneal inflammation, keratitis, etc. (16) The toxic effects of reactive oxygen species and other free radicals can be eliminated by specific antioxidant enzymes (SOD, GPx, CAT, etc.), which can help the cell to regain the pro-oxidant-antioxidant balance. More severe oxidative stress can cause cell death, apoptosis and necrosis. Behndig et al. (16) reported lower activities of SOD isoenzymes in tears, cornea, sclera, aqueous humor and lens, with the highest activity in the retina. Imamura et al. (17) reported that the lack of SOD 1 (Cu,Zn-SOD) could accelerate age-related pathological changes in the human retina such as drusen, thickened Bruch's membrane and retinal neovascularization. The inflammatory reaction is an important source of oxygen free radicals. Large amounts of superoxide radicals are secreted by activated phagocytic leukocytes, and are also formed as a by-product during the biosynthesis of leukotrienes and prostaglandins and the formation of lipid peroxides. (18) It has been documented that, under conditions of increased oxidative stress and generation of hydrogen peroxide, the SOD isoforms EC-SOD and Cu,Zn-SOD are susceptible to inactivation by blocking of the enzyme's active center. (19) On the other hand, a specific aging-related reduction of Cu,Zn-SOD activity in the human lens has been previously reported. Therefore, decreased activity of SOD could be a consequence of these processes and an important marker of the advanced form of AMD. According to Indo's "modified mitochondrial superoxide theory of oxidative stress", the superoxide generated in mitochondria plays an important role in oxidative stress related diseases and aging, and mitochondrial MnSOD is an essential antioxidant enzyme for the maintenance of cellular resistance to oxidative stress. (20) Using logistic regression analysis, we showed that there was a significant influence of some antioxidant markers (TAS, uric acid, albumin) and some inflammatory markers (CRP and fibrinogen) on the occurrence of the advanced-exudative form of AMD. A lower serum albumin concentration was documented by Virgolici et al.
(21) in patients with age-related cataract, indicating that increased oxidative stress and decreased redox albumin capacity were closely associated with the development of premature age-related cataract. Venza et al. (22) analyzed the impact of antioxidant enzymes and products of oxidative modification of macromolecules on the development of age-related macular degeneration in 308 patients. They concluded that ageing was closely associated with the oxidative stress and antioxidant status of the tested patients. An inverse relationship between the tested oxidant and antioxidant parameters was recorded, as well as a positive association between the determined antioxidant enzymes. Regarding uric acid, previous epidemiological studies have shown a positive correlation between serum concentrations of this parameter and the development of various diseases such as essential hypertension, metabolic syndrome, diabetes mellitus, etc., but so far there have been no data on how and whether aging influences serum uric acid levels. The results of Horwath-Winter and associates showed that subjects with senile cataract had significantly lower values of this parameter in the aqueous humor and tears than in the serum, (23) and that these values were significantly reduced in the course of the disease. In contrast to our results, Subramani et al. (24) reported higher serum uric acid levels in AMD patients with the neovascular form of AMD (exudative AMD) compared to the control group, and in relation to geographic atrophy, although in the overall group of patients with AMD the mean value of uric acid did not differ significantly from the average values in the control group. It is very important to mention the obtained negative correlation of patients' age with the antioxidant enzymes (SOD and GPx) and the positive correlation between GR and ageing, which is logical, given that SOD and GPx activities tended to decrease, while GR activity tended to rise, in AMD patients, especially in the exudative form of the disease. Suzuki et al. (18) found increasing quantities of oxidized phospholipids within the macular photoreceptors and the RPE of normal eyes with advancing age. Progressive accumulation of undigested lipid peroxidation products will stress the RPE, which can ultimately induce apoptosis, a well-established process in aging and AMD. (25) Oxidatively modified substances can stimulate the expression of gp130, the signal-transducing chain of the IL-6 receptor family, and the secretion of IL-6. (26) IL-6 induces the proliferation of vascular smooth muscle cells (VSMC) and the release of monocyte chemoattractant protein-1 (MCP-1). IL-6 increases the number of platelets in circulation, their modification, and the levels of fibrinogen and the coagulant phase of the clotting mechanism, which may lead to pathological thrombosis. (26) Several recent clinical studies suggest a close association between serum CRP and ocular vascular disorders related to AMD. (27)(28)(29) The findings of Nagaoka et al. (30) suggested that the detrimental effects of CRP could also affect the ocular circulation and might partially contribute to the development of retinal vascular disease. De Jong et al. (31) showed in the Rotterdam Study that there was a small but significant association between CRP and AMD. Another study (32) demonstrated a trend of increased risk of disease with increasing CRP, which was statistically significant for both polypoidal choroidal vasculopathy (PCV) and neovascular AMD.
The Rotterdam Study found that elevated baseline levels of high-sensitivity CRP were associated with the development of early and advanced AMD. (33) However, Klein and colleagues demonstrated no significant association between CRP plasma concentration and AMD or AMD progression in both case-control and prospective studies. (34) In a case-control study using patients recruited from Muenster (Germany), researchers found significantly elevated CRP levels as the degree of AMD severity increased compared with controls, (35) but when cardiovascular risk factors were taken into consideration, no statistically significant increases were found in the ORs of AMD patients compared with controls. (35) IL-6 is a marker of systemic inflammation. Seddon and colleagues performed a prospective cohort study with the aim of determining whether IL-6 could predict progression of AMD. (36) The group found a correlation between the level of IL-6 and the odds of AMD progression, and they showed that elevated IL-6 may serve as a marker for progression of AMD. However, Klein and colleagues found no significant association between plasma IL-6 levels and AMD, or AMD progression. (34) Similar to our results, Cohen et al. (37) showed that a significant association existed between antioxidant enzymes and AMD. They found that subjects with lower GR (OR: 1.63, 95% CI: 1.0-8.0, p = 0.05) and GPx (OR: 1.36, 95% CI: 1.0-2.0) had higher chances of developing AMD compared to subjects with normal enzyme values. Evans (38) went one step further, showing in a meta-analysis that the imbalance between oxidants and antioxidants, which can physiologically occur with aging, can be repaired with antioxidant supplementation, which can reduce the risk of developing AMD and may even reduce the progression of late forms of AMD and vision loss (OR: 0.77, 95% CI: 0.62-0.96). Yamato et al. (39) showed that drinking Tempol (4-hydroxy-2,2,6,6-tetramethylpiperidine-N-oxyl) could reduce lipid peroxidation and CRP levels and increase the concentration of ascorbic acid in aged mice compared to young mice. This study documented that the usage of a relatively low concentration (6 mM) of Tempol could have a life-extending effect, because of its impact on chronic inflammation and its beneficial effects on the immune and antioxidant systems. A case-control study by Lip and colleagues recently found elevated levels of plasma fibrinogen in AMD cases compared to controls. (40) A case-control analysis from the large Blue Mountains Eye Study in Australia detected significantly elevated plasma fibrinogen levels in late AMD patients compared to controls (p<0.05). (41) The relative risk of late AMD was 6.7 for fibrinogen levels higher than 4.5 g/L (highest quartile) compared to the lowest quartile. (41) In another study, using patients recruited from the Muenster Aging and Retina Study population, the researchers found elevated plasma fibrinogen levels as the degree of AMD severity increased. (34) Schaumberg et al. (42), in their 10-year follow-up study of 275,000 subjects, found a higher probability of AMD in subjects with higher hsCRP (OR: 3.09, 95% CI: 1.39-6.88, p = 0.02) and fibrinogen values (OR: 2.01, 95% CI: 1.07-3.75) compared to subjects with normal values of these parameters. Age-related impaired function of the antioxidant system has been reported by many studies. Like other age-related degenerative diseases, the incidence of AMD rises exponentially with aging.
During aging and pathological conditions, the balance between ROS generation and ROS clearance can be disturbed due to a reduced antioxidant defense system, resulting in oxidative damage to macromolecules. Therefore, therapy with antioxidants and anti-inflammatory substances may be beneficial for the improvement of endothelial function and the prevention of AMD.
2018-04-03T04:40:15.916Z
2017-02-08T00:00:00.000
{ "year": 2017, "sha1": "ff2deb1d8ff9106d96df1081b7b9b03689f73893", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/jcbn/60/2/60_16-30/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ff2deb1d8ff9106d96df1081b7b9b03689f73893", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
221525611
pes2o/s2orc
v3-fos-license
Telemedicine in rheumatology: a reliable approach beyond the pandemic Abstract Objectives The SARS-CoV-2 outbreak has imposed considerable restrictions on people's mobility, which affects the referral of chronically ill patients to health care structures. The emerging need for alternative ways to follow these patients up is leading to a wide adoption of telemedicine. We aimed to evaluate the feasibility of this approach for our cohort of patients with CTDs, investigating their attitude to adopting telemedicine, even after the pandemic. Methods We conducted a telephonic survey among consecutive patients referred to our CTD outpatients' clinic, evaluating their capability and propensity for adopting telemedicine and whether they would prefer it over face-to-face evaluation. Demographical and occupational factors were also collected, and their influence on the answers was evaluated by a multivariate analysis. Results A total of 175 patients answered our survey (M/F = 28/147), with a median age of 62.5 years [interquartile range (IQR) 53–73]. About 80% of patients owned a device allowing video-calls, and 86% would be able to perform a tele-visit, either alone (50%) or with the help of a relative (36%). Telemedicine was considered acceptable by 78% of patients, and 61% would prefer it. Distance from the hospital and the patient's educational level were the strongest predictive factors for the acceptance of telemedicine (P < 0.05), whereas age only affected the mastering of the required skills (P < 0.001). Conclusion Telemedicine is a viable approach to be considered for routine follow-up of chronic patients, even beyond the pandemic. Our data showed that older patients would be willing to use this approach, although a proper guide for them would be required. Introduction In February 2020, the new coronavirus SARS-CoV-2, causative agent of the coronavirus disease (COVID-19), was first identified in Europe, initially affecting regions of northern Italy [1]. In view of the pandemic, the Italian Government introduced restrictive rules regarding the free movement and assembly of citizens [2]. Since many hospitals have become almost entirely dedicated to COVID-19 patients, access to outpatient clinics has been severely restricted, leading to difficulty in providing correct management and follow-up of patients with chronic diseases, in particular those undergoing immunosuppressive treatment. Telemedicine is emerging as a useful tool during this pandemic for the assessment of COVID-19 patients. Since the outbreak, the use of telemedicine has been implemented for the early detection and appropriate management of COVID-19 cases among patients suffering from chronic conditions, as well as for routine follow-up of these patients [3,4]. Rheumatology key messages: (i) most of our CTD patients would accept evaluation through telemedicine; (ii) distance from the hospital and educational level are predictive of telemedicine acceptance; (iii) telemedicine may be a viable tool for reducing the burden of in-person outpatient visits. So far, the use of telemedicine in rheumatology has been very limited, and only a few experiences have been reported [5]. However, this outbreak has forced us to face different issues in the follow-up of chronically ill patients, which needs to be performed under safe conditions for both clinicians and patients.
To this end, in agreement with Associazione Lombarda Malati Reumatici (ALOMAR), the rheumatic patients' association acting in Lombardy, Italy, we conducted a preliminary survey, beginning 6 April, among patients referred to our CTD outpatient clinic, aimed at evaluating the feasibility of a telemedicine program [6]. Methods On 2-9 March 2020, we interviewed consecutive patients who had a visit scheduled between 15 March and 3 April 2020 at the CTD outpatient clinic of the IRCCS Policlinico San Matteo of Pavia, a third-level rheumatology centre in Lombardy, Italy. We could not obtain written informed consent from our patients because of the lockdown; therefore, all contacted patients who agreed to participate in this study were asked to provide oral consent, as approved by the Institutional Review Board of our institution. The questions in the survey were drafted taking into consideration the tools offered by our telemedicine platform, which allows us to perform video-calls and to share files (prescriptions, medical records, laboratory test results) in real time during the consultation and shortly afterwards. Demographics and occupational information were collected, along with the answers to the following pre-selected questions: 1. Is a personal computer available at home? 2. Do you have a smartphone? 3. Do you have an internet connection? 4. Do you have an e-mail address regularly in use? 5. Would you be able to upload your medical documentation in a shared folder? 6. If not able to by yourself, is there anybody who could help you? 7. Would you be willing to be examined by a doctor in telemedicine even after the pandemic? 8. Would you prefer to be examined by a doctor in telemedicine even after the pandemic? Data were reported as absolute numbers and percentages for categorical variables. Numerical variables were described using median and IQR. Binary logistic regression was conducted for the multivariate analysis. Three models were analysed, with respect to attitude to telemedicine, preference over face-to-face evaluation, and capacity to upload files. Statistical analysis was performed with SPSS for Mac (v11). Results We contacted 200 patients and ultimately reached a total of 175 patients, an 88% answer rate. We interviewed 147 females (84%) and 28 males (16%). The median age was 62.5 years (IQR 53-73). Patients were followed up at our outpatient clinic for SSc (n = 69, 39%), SLE (n = 49, 28%), idiopathic inflammatory myopathies (n = 31, 18%), SS (n = 3, 2%), and UCTD (n = 23, 13%). More than a third of the patients had a high school degree (n = 66, 38%), a similar number (n = 52, 30%) had a secondary school degree, a smaller proportion had a primary school degree (n = 35, 20%), and 16 (9%) patients were college graduates. Data from 6 patients were undisclosed (3%). Almost half of our patients were retired (n = 85, 49%), 65 patients (37%) had stable employment, 21 (12%) were unemployed, and 4 were students (2%). The patients' residence was at a distance of <50 km from the hospital in the majority of cases (125, 71.4%). Survey answers, organized according to the age range of the patients, are reported in Table 1. Most patients owned a device on which they would be able to perform the telemedicine assessment, either a personal computer or a smartphone (n = 140, 80%), and 77% (n = 134) routinely use e-mails. However, half of our patients would not be able to upload documents by themselves (n = 88, 50%); of these, 72% (n = 63) would rely on the help of a relative.
On that basis, a total of 151 patients (86%) were potentially able to complete a telemedicine evaluation. The majority of these patients (n = 137, 78%) would be willing to perform some of their routine visits in telemedicine even after the pandemic, and 107 (61%) would prefer it over an in-person visit. In multivariate analysis, the attitude towards having a consultation via telemedicine was correlated with the distance from the hospital [odds ratio (OR) 4.7, 95% CI: 1.38, 15.94, P = 0.01] and with the level of education (OR 0.17, 95% CI: 0.03, 0.86, P = 0.01). Indeed, college graduates would accept telemedicine in 87% of cases, whereas patients with an elementary degree would accept it in only 54% of cases. Education, but not distance, influenced the survey responses regarding preference for telemedicine over an in-person visit (88% vs 34%, OR 0.14, 95% CI: 0.02, 0.89, P = 0.04). In neither case (predisposition or preference for telemedicine) did age emerge as a significant predictive factor (see Table 2). A diagnosis of UCTD emerged as a deterrent from telemedicine vs all the other diagnoses (OR 0.29, 95% CI: 0.09, 0.94, P = 0.04). As expected, mastery of the skills required for the use of a virtual platform was inversely correlated with age (OR 0.85, 95% CI: 0.79, 0.91, P < 0.001) and with a low level of education (OR 0.17, 95% CI: 0.08, 0.37, P < 0.001). Discussion The limited access to routine care due to the outbreak of COVID-19 produced great concern in patients affected by chronic diseases, forced home by the national lockdown and by fear of catching the infection in at-risk environments such as hospitals and medical care facilities in general. Restriction rules protected patients from unnecessary risk of getting infected, such as during the use of public transport or attendance at health-care facilities, but they represented an objective obstacle to undergoing routine assessments. In order to cope with this situation, following previous experiences in rheumatologic settings [5,7-9], we identified telemedicine as one of the possible options to be offered to our patients suffering from chronic inflammatory diseases. We have tested the possibility of performing a thorough tele-rheumatological assessment through visual interaction and actual data and prescription sharing, rather than a simple phone consultation or email exchange. This may help explain the high rate of acceptance by our patients. Our survey data show that, in a cohort of consecutive rheumatologic CTD outpatients, the vast majority are capable (86%) and also willing to accept (78%) telemedicine as a method of participating in their routine consultations. In addition, in more than half of cases (61%) they would even prefer it over a routine face-to-face assessment after the pandemic. Surprisingly, in our survey population, age was not a predictive factor for the acceptance of telemedicine nor for its preference, which is a critical factor in rheumatological cohorts with a generally low prevalence of young people. In both multivariate analyses, for either predisposition to or preference for telemedicine, a relevant factor impacting the outcome was the patients' education level, a college degree being a significant factor favouring a positive response. As expected, patients residing >50 km away from the hospital had a more favourable opinion of telemedicine, which is an issue that should be particularly taken into account for tertiary referral centers, often accessed by patients living in other regions.
When evaluating patients' preferences for telemedicine, the different disease diagnoses were found to have a significant effect on the results. Patients with UCTD were less likely than patients with SSc, SLE, idiopathic inflammatory myopathies or SS to prefer telemedicine, a factor possibly related to a perception of their clinical picture as having an evolving nature. Although obtained in a very specific setting such as our CTD outpatient clinic, we think that these findings could be generalizable to other conditions, such as RA or OA. However, additional studies are probably needed to assess telemedicine in those patients with symptoms linked to anxiety or depression, as in the case of fibromyalgia. When exploring the requisites for efficiently using the platform (digital device/email possession and file upload), older patients were less skilled, but in most cases they were efficiently supported by external help, and this issue did not affect their responses about telemedicine. A more direct involvement of PARE societies could be a valuable resource for improving the applicability of this approach in this group of patients. Although our survey showed that in 2020 our patients were ready for (and sometimes preferred) telemedicine, several issues concerning respect for patients' privacy should be evaluated when a similar process of care is applied. In fact, both the confidentiality of patients' information and the security of data exchange should be ensured. We have tested the attitude of patients to a video-consultation experience, not just a phone consultation, to ensure visual contact between the patient and his/her physician. It is equally critical to make the sharing of documents and prescriptions easy and safe. The widespread use of the internet, social media, e-mails and smartphones, although essential for telemedicine implementation, could induce incorrect behaviours, thus driving the need for a strict regulation of teleconsultations. The outbreak of the SARS-CoV-2 pandemic requires a global change of perspective and a gain of awareness about these issues. Rheumatologists should gain competence in providing this kind of service and an understanding of the value of virtual communications through safe channels [10]. This outbreak has already changed and will continue to change our social behaviour and, as a consequence, our approach to daily practice. We believe the use of telemedicine is a valuable tool we should consider. So far, the easy access to rheumatology care in the majority of European countries has limited the need for, and the advantages of, telemedicine over traditional consultations. Patients' reduced direct access to outpatient clinics, in addition to its actual role in reducing the risk of SARS-CoV-2 spreading, may also make telemedicine an important tool for easing the economic burden of chronic diseases, a matter not to be ignored in the economic crisis we will face due to COVID-19. Telemedicine could be useful in reducing some indirect costs of rheumatic diseases, like the loss of working days for patients and/or caregivers [11,12], and on the other hand it may favour a more rapid access to the clinic for those urgent and acute conditions that cannot be addressed by telemedicine alone [13]. Moreover, the use of other available technology resources may increase the continuity of patient monitoring through the use of dedicated apps and the ability to report day-by-day conditions, symptoms and signs [14].
The steadily increasing penetration of smartphones may facilitate telemedicine and the integration of new technologies into the regular follow-up of patients. However, although a link between platform visits and smartphone-based apps is desirable due to increased user-friendliness [15], privacy and data security must always be ensured. Our experience is surely encouraging and indicates that people are ready to change their approach to routine care. Telemedicine is proving itself a valuable aid in routine medical care over this period, but it should be recognized that it cannot replace in-person evaluation. For our CTD patients with stable disease, we believe that a yearly in-person evaluation should be performed, in order to assess the possible presence of new physical signs. A careful evaluation of the pertinence of this approach must be carried out by clinicians, assessing on a case-by-case basis what the best strategy would be for every patient.
2020-08-06T09:05:03.901Z
2020-09-07T00:00:00.000
{ "year": 2020, "sha1": "38e3d98482827bc624bef4635b8376d68fa2efff", "oa_license": null, "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7499691", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "302fb07ddd0bea415c9c4b301a2bdc13ddb19c65", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246762584
pes2o/s2orc
v3-fos-license
Hands-Free Authentication for Virtual Assistants with Trusted IoT Device and Machine Learning Virtual assistants, deployed on smartphone and smart speaker devices, enable hands-free financial transactions by voice commands. Even though these voice transactions are frictionless for end users, they are susceptible to typical attacks on authentication protocols (e.g., replay). Using traditional knowledge-based or possession-based authentication with additional invasive interactions raises users' concerns regarding security and usefulness. State-of-the-art schemes for trusted devices with physical unclonable functions (PUF) have complex enrollment processes. We propose a scheme based on a challenge-response protocol with a trusted Internet of Things (IoT) autonomous device for hands-free scenarios (i.e., with no additional user interaction), integrated with smart home behavior for continuous authentication. The protocol was validated with automatic formal security analysis. A proof of concept with websockets presented an average response time of 383 ms for mutual authentication using a 6-message protocol with a simple enrollment process. We performed hands-free activity recognition of a specific user, based on smart home testbed data from a 2-month period, obtaining an accuracy of 97% and a recall of 81%. Given the data minimization privacy principle, we could reduce the total number of smart home event time series from 7 to 5. When compared with existing invasive solutions, our non-invasive mechanism contributes to the efforts to enhance the usability of financial institutions' virtual assistants, while maintaining security and privacy. Introduction Security is one of the relevant emerging challenges for the Internet of Things [1][2][3]. Security attacks in daily life [4] raise user concerns about the maturity of the technology. The demand for resiliency against the cyber attacks faced by IoT devices reveals resource limitations (e.g., energy consumption, memory, and processing), which inhibit the use of existing asymmetric cryptography solutions [5]. Major banks worldwide offer online banking to their customers to reduce costs and improve convenience of use. Online banking makes it possible for customers to check their balances and perform many financial transactions anywhere, anytime. However, emerging cybersecurity attacks make the reliance upon single-factor authentication (e.g., username/password) a growing concern for banks. By strengthening their authentication mechanisms, banks can effectively protect the confidentiality and integrity of sensitive customer data, thus avoiding the financial loss and reputation damage resulting from events such as fraud and customer data disclosure [6]. A plethora of attacks pose a threat to IoT systems. Considering voice-triggered financial transactions, attacks such as impersonation, replay, speech synthesis, and voice conversion are relevant, as illustrated in Figure 1. In an impersonation attack, the adversary is a human being who tries to impersonate a genuine user's voice; in a replay attack, prerecorded audio is played on a compromised speaker; speech synthesis is used by knowledgeable adversaries to generate artificial speech attacks; finally, voice conversion attacks take a step further, by trying to model a specific user's voice using statistical techniques [7]. Multi-factor solutions based on wearables and voice recognition: Feng et al. [32] proposed VAuth, an authentication scheme for voice assistants.
It is designed as a wearable security token to provide an additional channel for physical access control. It detects body-surface vibrations with an accelerometer in eyeglasses, sends them over a Bluetooth connection to the mobile device, and the mobile device receives the wearable data and matches it with the voice command from its microphone. This solution is resilient to replay and impersonation attacks and incurs low latency, with an average overhead of 300 ms, and it achieved 97% detection accuracy with 18 real users. However, the proposed solution enables authentication only when users are performing voice commands, not in a continuous way. A summary of related work is presented in Table 1. None of the listed studies combine trusted device and behavior factors to perform user authentication in a non-invasive manner with a simple enrollment process. Among the recoverable entries of Table 1, all the listed schemes have complex enrollment processes: [27] was evaluated with 20 users (95% accuracy, 1000 ms response time), PALOT [29] with 24 users (70% accuracy), and REVOLT [30] with 10 users (97% accuracy, 1100 ms response time). Among the presented solutions, the ones that present accuracy and response time results are EarEcho, REVOLT, Wivo, and VAuth. There are no solutions that combine trusted device and behavior authentication factors, and none that do not require a wearable device or an additional interaction other than the original voice command (i.e., non-invasive); moreover, none have a simple enrollment process. Solutions such as UCFL or PALOT may present results with more users or with a lower response time, but they are single-factor solutions. The multi-factor solutions REVOLT and Wivo have a complex enrollment process, because they are based on voice biometrics, and VAuth uses a wearable device, which we consider an invasive solution for the hands-free voice transaction scenario considered. Usability A user's goal when using an information system is to perform an intended task, and authentication is the function that ensures that only legitimate users perform this task using the associated system. However, from a user's perspective, the authentication procedure could be viewed as a laborious process that stands between users and their intended task. Effective authentication design and implementation must consider usability by making it easy for legitimate users to carry out the right procedure, hard to carry out the wrong procedure, and easy to recover if a problem arises. Poor usability often results in coping mechanisms that can degrade the effectiveness of security controls [33]. User authentication must be secure but also convenient and easy to deploy and use in order to be widely accepted. It is possible to have several authentication schemes, as long as they are complementary and do not detract from usability. Different approaches are appropriate for distinct scenarios: speed might be prioritized for device unlocking, and memorability might be prioritized for fallback authentication, for example [34]. Considering the scope of this work, the goal of hands-free interactions is to support users in a convenient and frictionless way as they carry out their daily tasks; thus, integrating a secure but invasive scheme may undermine the enhanced usability that was intended in the first place. Feng et al. [32] propose the use of wearable devices such as eyeglasses, earbuds, and necklaces. The authors conducted a survey with 952 participants, recruited using Amazon Mechanical Turk in the US, who had previous experience with voice assistants.
Using a 7-point Likert scale (from strongly disagree to strongly agree), 47% of the participants affirmed (with scores of 6 or 7) that they would be willing to use a wearable device to perform authentication. Ponticello [20] investigated the perceptions of 16 smart speaker users from Germany (15) and Italy (1) with an exploratory survey. The authors conducted one-hour semi-structured interviews in remote and face-to-face form. These surveys considered hands-free scenarios that illustrate user journeys where performing an additional interaction on another device harms usability. For example, one of the participants stated that "I think that's impractical, because if I have to pick up a smartphone to verify myself, then I could check it right away, via an app". The author argues that the main reason for users to interact with a voice assistant was that the interactions were effortless when compared with computers or smartphones, and that if an authentication mechanism takes away these features, the participants would not be willing to adopt it [20]. Consider the invasive user journey described in Figure 2, where the user initiates a hands-free financial transaction by voice using a smart speaker. However, the authentication must be performed on another mobile device, which the user did not want to interact with in the first place. Although the authentication may be secure through the use of a one-time password (OTP), the additional user interaction on another device goes against the original objective of providing a hands-free interaction. The financial transaction result is provided by the smart speaker to the user in a frictionless way. Voice biometrics was the preferred method of authentication for most participants, but some had doubts regarding the maturity of voice recognition, as illustrated by the following statement: "Currently no, not satisfied. You can tell the difference, it recognizes you by your voice, but even this recognition sometimes does not work, and I think that is very rudimentary. It's nice that you can see that this feature is under development, but it is far from mature" [20]. The results presented by Ponticello [20] indicate that users have a preference for authentication mechanisms that do not require an additional interaction with another device. However, even considering that users do not want an invasive mechanism such as a user password, they want these financial transactions by voice to be secure. While voice biometrics is the most preferred method, it is still not mature, according to the participants. Therefore, we considered that our design must not require an additional interaction with another device by means of user action, and that it should not rely on voice biometrics, as it is still not sufficient to secure voice transactions [35]. Privacy We present some privacy considerations based on existing research regarding IoT and smart speaker privacy, privacy by design, and the Brazilian and European data privacy laws. Some open research challenges on IoT privacy are related to risk analysis, informed consent of the user (e.g., data collection and sharing), and context-aware user privacy preferences, considering the dynamic nature of IoT environments [36]. A major concern with smart speaker architectures hosted on a public cloud is privacy, as information disclosure cases have already occurred (e.g., unauthorized recording of personal conversations on Alexa).
While users demand context-aware data access control policies (e.g., who is at home and where the request comes from) [37], smart speaker users' privacy perception is still at an early stage (e.g., voice recognition services are available, but are not broadly used [38]). A participant in the smart speaker user perceptions survey conducted by Ponticello [20] proposed an ideal data flow that would not allow Alexa or Amazon to intercept any data. The author proposes a possible technical solution to provide a direct talk between a user and their bank by decoupling the smart speaker from the Alexa cloud after the initial connection to the third party (i.e., the bank), which would then be responsible for audio processing, rather than Amazon. Privacy by design principles are applied to systems design to mitigate privacy concerns at an early stage. Gürses et al. [39] discuss engineering privacy by design, which consists of principles that may be applied to mitigate privacy concerns and achieve data protection compliance by integrating these principles into the system development process. The two case studies presented followed these main steps: functional requirements analysis, to assess whether the functionalities are feasible and well defined, as vague or implausible descriptions may lead to solutions that collect more data than necessary; data minimization, including state-of-the-art research to evaluate which data may be minimized or whether there are alternative architectures that could contribute to data minimization; modeling attackers, threats, and risks; multilateral security requirements analysis, to consider conflicting non-functional requirements and constraints (e.g., integrity, availability); and implementation and testing of the design in a solution that fulfills the functionalities while using and revealing the minimal amount of private data. The Brazilian law LGPD ("Lei Geral de Proteção de Dados" in Portuguese, or "General Data Protection Law" in free translation to English) was approved in August 2018 and has been enforced since August 2020. It describes personal data as information related to an identifiable or identified natural person; sensitive personal data is described as information regarding racial or ethnic origin, political opinion, syndicate affiliation, affiliation to a religious, philosophical, or political organization, sexual life or health-related data, and genetic or biometric data. Among others, this law has the following principles: data processing according to a purpose agreed with the data holder; free access by the data holder to data treatment information, and data integrity; transparency regarding data treatment procedures and the associated treatment agents; prevention of damage related to personal data leakage; and proof of compliance with data protection rules. However, sensitive data can be treated with the user's consent for specific uses, or even without user consent if it is imperative to ensure fraud prevention or the security of the holder, as in identification and authentication processes in electronic systems. In addition, children's consent is based on the consent of their legal guardians, and the law foresees the need for risk and failure management for everyone who uses personal data. There is also centralized inspection by a national authority and the possibility of severe penalties in cases of noncompliance with the law [40].
Aleksanjan [41] investigated the compliance of virtual personal assistants (Amazon Alexa, Apple Siri, Microsoft Cortana, Google Assistant, Samsung Bixby) with the European Data Protection Framework, including the General Data Protection Regulation (GDPR), by analyzing the associated privacy policies. The results indicate that the five companies analyzed are not fully compliant with the GDPR, with Google standing out in the transparency aspect. Apple failed to inform data subjects about their rights; Amazon, Apple, and Microsoft did not adequately inform data subjects about the purposes for processing their personal data, and the relevant legal bases were not mentioned. The GDPR was adopted in April 2016 and has been enforced since May 2018; it describes rules regarding how personal data should be processed, the subjects' rights, and the sanctions if the rules are not followed. It states that personal data must be processed lawfully, fairly, and in a transparent manner, with purpose limitation and data minimization. It also considers integrity, confidentiality, transparency, and accountability principles, conditions for consent, and specific conditions for children's data processing [41]. The following GDPR principles may be considered: • The purpose limitation principle means that data controllers should only use the collected data for specific purposes. These purposes should be explicit and determined at the time of the personal data collection; • The data minimization principle refers to personal data being adequate, relevant, and limited to what is necessary considering the purposes for which they are processed (i.e., data controllers are only allowed to collect personal data that are necessary to fulfill the specific purpose); • The transparency principle states that the processing of personal data should be transparent to the data subjects, who must be knowledgeable about their rights and have the means to exercise them. Natural persons should be aware of the risks, rules, and rights in relation to the processing of personal data; • The accountability principle refers to the controller being able to demonstrate compliance with the privacy principles. We will specifically address the data minimization principle by investigating how the number of time series from a smart home impacts hands-free activity recognition for continuous authentication. Security In this section, we present a literature review on existing vulnerabilities and attacks in personal assistant systems. It is organized as presented in Figure 3, whose objective is to provide relationships among vulnerabilities, user interface devices, attacks, and their vectors, as described in the literature. Existing vulnerabilities in personal assistant systems threaten the security of financial transactions using the new interface. These vulnerabilities can be related to weak authentication mechanisms, or even incomplete voice application certification procedures. User interface devices are those on which human users can initiate an interaction with a personal assistant. Some can provide additional authentication mechanisms, such as desktop and smartphone devices. The attack vector is the compromised device used by the adversary to execute an attack. For example, it is possible to use the same device into which the personal assistant is integrated, or to use another device to initiate the attack. The attacks may have financial consequences, or even constitute an invasion of privacy.
Some attacks require more knowledgeable adversaries, but some are simple to execute, such as the replay attack. The existing authentication mechanisms, based on knowledge, possession, and biometrics, are susceptible to several threats and attacks. Something you know may be disclosed to an attacker, something you have may be lost, damaged, stolen, or cloned, and something about you may be replicated. Replay, phishing, social engineering, and man-in-the-middle (MitM) attacks could be performed by motivated attackers. For example, even a one-time password (OTP) authenticator that requires manual entry of its output shall not be considered impersonation-resistant, because the manual entry does not bind the authenticator output to the specific session being authenticated. Consider a MitM attack: an impostor verifier could replay the OTP authenticator output to the verifier and successfully impersonate the user [33]. The first attack found in the literature was the DolphinAttack [8], highlighted in red in Figure 3. Inaudible voice commands were recognized by commercial speech recognition systems, such as Siri, Google Now, and Alexa. These commands were produced by specific hardware (an amplifier and an ultrasonic transducer), and the attacks were validated with experiments using smartphones from various vendors (e.g., Apple, LG, Asus, Samsung, Huawei, Lenovo). It was feasible to initiate a FaceTime call on an iPhone, and to put smartphones in airplane mode using Google Now. The second attack is the fake order [23], highlighted in green in Figure 3. In this attack, the adversary could exploit smart speaker vulnerabilities to place orders on Google Express and Amazon. The vulnerabilities considered are the reliance on single-factor authentication and the lack of a physical access control mechanism in Alexa devices. The acoustic devices used in the attack are Bluetooth speakers and smart TVs. The third attack regards privacy concerns [42], highlighted in yellow in Figure 3. It was demonstrated that the voice application certification process is still immature: 100% of 234 Alexa skills and 39% of 381 Google actions with privacy violations were successfully certified. With no re-certification procedure, a voice application could be modified after initial certification without any additional validation, and personal sensitive information (i.e., the name) could be collected on third-party servers by using children-intended Alexa skills. Phishing attacks [43] are also feasible in voice applications, as highlighted in black in Figure 3. The certification process provides weak control over personal assistant application names, so users could activate and interact with malicious applications whose names resemble those of trusted voice applications. Existing vulnerabilities in mobile devices could also be exploited if the personal assistant is deployed on a smartphone (e.g., Siri, Google Assistant). Collusion attacks can be performed using inter-app communication to perform elevated-privilege actions using two or more Android applications [44]. This attack is highlighted in purple in Figure 3. There is a dangerous combination of voice input and output permissions on Android devices and the chain of attacks from one device to another. As stated in a study found in the literature [35], even solutions such as voice recognition could not be considered a panacea, as attacks could be initiated from nearby connected devices with speakers (e.g., a smartphone or a Bluetooth speaker).
Inter-app communication, the use of microphones by means of intents, and the unique coupled permission of voice input and output are described as potential threats on Android devices. The attack that could make a malicious application take control of voice input without user acknowledgment is highlighted in green in Figure 3. The inability to detect fake audio and the reliance on a single non-invasive authentication factor (e.g., voice) are some of the vulnerabilities that lead to replay attacks. The Alexa speaker recognition system is not capable of distinguishing recorded audio from real voice, and the Google speaker recognition system only performs voice verification on the wake word (i.e., "Ok Google"). If nearby devices with integrated speakers are compromised, then adversaries can record genuine voice commands and replay them afterwards, successfully performing voice replay attacks by leveraging the vulnerabilities present in the smart speaker user interface. The replay attack is highlighted in red in Figure 3. The attacks presented in this section are by no means exhaustive. For example, there are other attacks such as LightCommands [45], a signal injection attack that converts light to sound to obtain control of Amazon Alexa, Google Assistant, Apple Siri, and Facebook Portal at distances of up to 110 meters. Another attack that may be executed without the user noticing is CommanderSong [46], an attack generated automatically by integrating voice commands and background noise into songs, which is difficult for human listeners to detect. Design Goals We model the existing smart home voice transaction scenario by defining the bank server, internet banking, trusted mobile, and voice user interface components. Definition 1 (Bank Server-BS). The bank server is the bank authentication server, which is integrated with bank back-office services that effectively authorize and execute financial transactions (e.g., money transfers). Definition 2 (Internet Banking-IB). The internet banking mobile application makes banking services available to users. This application is deployed on the mobile device, and it is the existing interface for banking services. Definition 3 (Trusted Mobile-TM). A trusted mobile is a mobile application used for authentication with the bank server. Users and their trusted mobiles are associated in the enrollment phase. There is an injective relationship between a user and their trusted device (i.e., each trusted mobile is associated with a unique user, and each user is associated with a unique trusted mobile). Definition 4 (Voice User Interface-VUI). The voice user interface makes personal assistants, such as Alexa and Google Assistant, available to users. Voice commands and queries are performed by users in a frictionless manner (i.e., only the voice is needed to issue commands to voice user interfaces). The voice user interface communicates with the internet banking application on the same mobile device. Additionally, we consider the definitions of trusted IoT device, trusted location, and non-invasive authentication, which are fundamental building blocks of the proposed scheme. Definition 5 (Trusted IoT Device-TIoTD). A trusted IoT device is a proposed specific device used with the trusted mobile to perform authentication with the bank server. It is an additional device, other than the existing mobile device used for internet banking, and it is deployed in a trusted location. Users and their trusted IoT devices are associated in the enrollment phase.
There is an injective relationship between a user and their trusted IoT device.

Definition 6 (Trusted Location). A trusted location is a place that the genuine user visits frequently. The frequency must be at least weekly, and the trusted locations for each genuine user are registered in the enrollment phase. Examples of trusted locations are workplaces and residences.

Definition 7 (Non-invasive Authentication). A non-invasive authentication for a voice financial transaction command is an authentication that does not require additional interactions from the end user, nor that the end user hold a wearable device. Examples of non-invasive authentication are voice authentication and the proposed authentication performed with a trusted IoT device in an autonomous manner.

Considering the potential attacks and the usability discussion presented, it is desirable that the proposed solution support hands-free voice transactions. Taking into account the reliance upon a trusted IoT device deployed in a trusted location, we envision the non-invasive user experience illustrated in Figure 4. The hands-free interactions are maintained in the three steps, from the financial voice transaction command to its result. The non-invasive authentication is supported by a challenge-response protocol with a trusted IoT device, and a continuous authentication is performed using the behavior learned in a trusted location (i.e., in this case, the smart home). The combination of trusted device and continuous authentication is performed in an autonomous way to support hands-free authentication, thus not requiring any additional user interactions, such as a confirmation on the mobile device. The considered requirements are:

• The mechanism must provide mutual authentication;
• The novel authentication mechanism must have at least the same security level as the existing invasive authentication mechanism (i.e., the smartphone token in internet banking (IB));
• The authentication mechanism must have a response time comparable to the state-of-the-art schemes found in the literature;
• The mechanism must be a non-invasive procedure. It should provide an acceptable security level while maintaining the usability of the voice user interface (VUI).

Threat Model

In this article, we consider the replay and fake order attacks on the voice user interface available in a smart speaker device using a compromised nearby speaker, because of how easy these attacks are to perform. Other attacks that require specific hardware or the attacker's physical presence were considered more complex and thus the province of more knowledgeable adversaries; therefore, they are outside the scope of this study. The threat model is defined below and illustrated in Figure 5:

Adversary's Goal: The adversary wishes to reduce the legitimate user's balance;

Adversary's Knowledge: The adversary has access to some data samples from previous financial transaction voice commands collected from a nearby compromised speaker device (e.g., a personal computer or a smart TV);

Adversary's Capability: The adversary can control the compromised nearby speaker device to play a previous voice command or an altered voice command whenever convenient.
The random number used in the authentication cannot be guessed by the attacker;

Adversary's Limitation: The inter-app communication in the mobile device is considered secure (i.e., the adversary cannot obtain the shared key in the mobile device by a collusion attack [44]), as we rely on the mobile operating system's security. The adversary does not have the resources to perform a large-scale attack on the bank server and compromise the shared keys in the bank's possession. The trusted IoT device is considered secure, and we consider that the adversary cannot steal it from the legitimate owner in the trusted location. Internal attacks on the voice user interface, such as phishing, are out of the scope of this study.

Assumptions and Hypotheses

We consider the following two assumptions and two hypotheses in the development of the proposed non-invasive scheme.

Assumption 1. It is desirable not to use existing mechanisms with a high computational load, such as asymmetric cryptography, considering the performance of constrained Internet of Things (IoT) devices. As IoT devices are constrained (e.g., in energy and computing power), using asymmetric cryptography in this scenario is not desirable due to its high computational cost [47].

Assumption 2. A single biometric authentication factor, such as voice biometrics, is not enough to guarantee security for financial transactions by voice. It is not possible to rely solely on the voice as a single authentication factor [35]. NIST states that biometrics shall be used only as part of multi-factor authentication with a physical authenticator (something you have) [33]. Inaudible, phishing, replay, and other attacks have been demonstrated in the literature, as described in Section 3.3.

Hypothesis 1. A continuous authentication mechanism, based on behavior learning, can be built on data collected by Internet of Things devices deployed in a trusted connected location (e.g., the smart home). As proposed in the literature, IoT could be leveraged to provide context-aware, continuous, and non-invasive authentication services. The main benefit is related to usability, as the user does not need to carry intrusive devices or remember complex secrets. Such a solution must recognize users' behavioral patterns to validate their identity [29] and may strengthen the authentication process at the time of the access request and throughout the session, without requiring additional user intervention [48].

Hypothesis 2. Performance and privacy requirements for non-invasive user authentication are achieved with an edge computing architecture and privacy by design. Edge computing follows the guideline of bringing the computation closer to where it is needed; it can reduce the latency of requests and reduce network costs [49,50]. Privacy by design is applied in the system conceptualization to address privacy concerns [39]. Additional principles can be found in privacy regulations, as described in Section 3.2.

Architecture

Figure 6 presents the proposed architecture. Consider the scenario of a financial transaction by voice. The command is captured by the voice user interface, which is integrated with various natural language processing services. When a financial transaction intent is identified, an authentication request is sent from the voice user interface to the internet banking application, deployed on the same mobile device. IB then sends the authentication request to the bank server using a secure channel, such as TLS.
A challenge-response protocol is performed for mutual authentication between the trusted IoT device and the bank server (deployed in the cloud), with the trusted mobile as an intermediary, based on the shared keys K1, K2, and K3. The physical unclonable function (PUF) is used as input to a pseudo-random number generator employed in the challenge-response protocol, rather than directly through challenge-response pairs (CRP). After successful authentication, continuous authentication is performed for session management by leveraging the real-time data collected by IoT devices. If the detected behavior differs sufficiently from the previously learned behavior, the session is terminated. Otherwise, the session is maintained for subsequent low-value financial transactions. If the next transaction is a high-value financial transaction, the challenge-response protocol must be performed again.

After the user identity is validated, based on the possession of the mobile device with the trusted mobile application and on the behavior biometrics from the trusted location, the bank server must provide the authentication result to the voice user interface, which can play a final voice response to the user. The scope of this work, illustrated in Figure 6, is within the highlighted blocks with thick edges (i.e., bank server, trusted mobile, trusted IoT device, behavior learning, and IoT devices). According to different user security and privacy preferences, there is also the possibility of requiring an invasive procedure for high-value financial transactions and of opting in or out of the use of the data collected by IoT devices. Considering the purpose limitation and transparency principles, the purposes, risks, and rights associated with the IoT data must be made transparent to the user prior to any continuous authentication deployment. The behavior learning must also support data minimization.

The inter-app communications in the mobile environment are considered secure. Attacks on inter-app communications that exploit trusted mobile operating system vulnerabilities [44] are not considered in the scope of this work. The communication between IB and BS is also considered secure. This secure communication channel could be established with transport layer security (TLS), so the messages between the internet banking application and the bank server are considered to be in a secure communication channel.

A final remark regards the authentication rate limit. Considering the following definition, after 5 wrong tries, the non-invasive user authentication should be disabled, and a knowledge-based invasive authentication may be offered as an alternative.

Definition 8 (Authentication Rate Limit). NIST [33] suggests rate limiting of 5-10 consecutive tries with an exponentially increasing back-off time. A minimal sketch of such a limiter is shown below.
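The following sketch illustrates Definition 8 in Python. The 5-try limit comes from the text; the 1 s base delay and the fallback hook disable_noninvasive_auth() are assumptions for illustration, not part of the proposed scheme.

import time

MAX_TRIES = 5  # after this, fall back to knowledge-based invasive authentication

class RateLimiter:
    def __init__(self):
        self.failures = 0
        self.locked_until = 0.0

    def allowed(self):
        # requests are rejected while the back-off window is open
        return time.monotonic() >= self.locked_until

    def record_failure(self):
        self.failures += 1
        # back-off doubles with each consecutive failure: 1 s, 2 s, 4 s, ...
        self.locked_until = time.monotonic() + 2 ** (self.failures - 1)
        if self.failures >= MAX_TRIES:
            disable_noninvasive_auth()  # hypothetical hook to the invasive fallback

    def record_success(self):
        self.failures = 0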
Enrollment

The enrollment process is classified as either simple or complex. Complex schemes involve the extensive registration of challenge-response pairs for devices with integrated PUFs [26,51-53]. Our proposal is based on shared secrets, which is considered a simple enrollment process in terms of scalability and ease of management. We also do not use the CRPs directly, because of the associated enrollment process.

Definition 9 (PUF). Physical unclonable functions (PUF) are used in challenge-response protocols either directly or as sources of randomness. By leveraging their unique physical characteristics, which originate from the manufacturing process, strong PUFs generate large challenge-response pair (CRP) spaces [54,55]. PUFs can also be used in session key generation [56].

Definition 10 (Complex Enrollment). A complex enrollment process is defined as a manual and onerous operation for the end user. Some examples are the offline provisioning process of PUFs and facial and voice biometrics registration.

As illustrated in Figure 7, PUF-based enrollment consists of an offline provisioning procedure wherein the PUF chip is directly connected to a fog/edge device (considered a server entity). A single random serial number is the id of the PUF device and is sent together with the responses to challenges issued by the server. The challenge-response pairs (CRP) are mapped to the serial number and sent from the server to the cloud in a secure manner. For example, this might consist of the generation of 2^N CRPs for a strong arbiter PUF with N-bit challenges (i.e., for a challenge of 16 bits, there are more than 60,000 CRPs) [57]. The direct usage of CRPs generated by a PUF requires that the server store a large number of CRP pairs, scaling proportionally with the number of devices [58].

Considering that the user already has an invasive authenticator registered with their bank (e.g., a smartphone), the enrollment of the trusted IoT device can be classified as the binding of an additional authenticator at the existing authentication assurance level. According to NIST [33], in this case, the user must authenticate with the existing authenticator to add the new one. After successful addition, a notification should be issued to the user via an independent mechanism, such as an email address previously associated with the user. The user must register their trusted locations (according to their personal privacy preferences), trusted mobile, and trusted IoT device. The shared keys between the trusted IoT device, the trusted mobile, and the bank server are registered in each entity. These shared keys must be at least 128 bits. The identifiers (i.e., BS-bank server; TM-trusted mobile; tIoTd-trusted IoT device) are also registered.

Continuous Authentication

Considering the following session definition, the proposed scheme uses continuous authentication based on smart home behavior to detect anomalous situations in which the session must be terminated. The session is transparent to the user for increased usability, and the accuracy of the smart home continuous authentication directly influences the user experience.

Definition 11 (Session). Poor usability of frequent invasive user authentication motivates users to perform workarounds, such as cached unlocking credentials that negate the authentication freshness. A session host performs session management for increased usability and security. A session is initialized in response to an authentication event by a session subject. The session host generates a secret of 64 bits for session binding and provides it to the session subject. A session may be terminated by an inactivity timeout, an explicit logout event, or other events [33].

The smart home continuous authentication module learns the usual behavior of the household based on the data collected by the IoT devices deployed in this trusted location. When a user is authenticated successfully by the challenge-response protocol with the trusted IoT device, the continuous authentication begins to monitor the smart home events in real time. If the module detects unusual behavior, it terminates the session. A minimal sketch of this session flow is shown below.
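The sketch below shows the 64-bit session binding secret of Definition 11 and the termination logic; is_usual_behavior() stands in for the behavior model, and the 15 min inactivity timeout is an assumed value, not one specified in this work.

import secrets
import time

SESSION_TIMEOUT = 15 * 60  # inactivity timeout in seconds (assumed value)

def open_session():
    # the session host binds the session with a 64-bit secret (Definition 11)
    return {"secret": secrets.token_bytes(8), "last_activity": time.monotonic()}

def session_valid(session, smart_home_events):
    if time.monotonic() - session["last_activity"] > SESSION_TIMEOUT:
        return False  # terminated by inactivity timeout
    # terminated when monitored events diverge from the learned behavior
    return is_usual_behavior(smart_home_events)  # hypothetical behavior model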
As NIST [33] also states, the reauthentication procedure to prevent session termination may be performed by the presentation of a biometric authenticator, which motivated the use of behavior biometrics to provide continuous authentication for session management.

Challenge-Response Protocol

In this section, the authentication protocol for mutual authentication is presented, together with a formal security analysis of the proposed protocols. The authentication protocol is based on the SKID3 protocol [59]. SKID3 is a 3-step protocol that supports mutual authentication, and it is suitable for resource-constrained devices, as stated in [60]. It uses random numbers as the protocol nonces. We consider three shared keys to adapt this extended protocol to the three entities (BS, TIoTD, and TM). The key K1 is shared between BS and TIoTD; key K2 is shared between TIoTD and TM; key K3 is shared between BS and TM.

Protocol Description

The bank server generates a random number as the first challenge, associates the trusted IoT device identifier, and performs two symmetric cryptography procedures: a keyed hash with shared key K1 and another keyed hash with shared key K3 (M1). The trusted mobile receives the message, decrypts the first layer with K3, and sends the result to the trusted IoT device (M2). The trusted IoT device receives its identifier and the first challenge, generates another challenge, and provides the answer with the bank server identifier, applying two procedures: a keyed hash with K1 and a keyed hash with K2 (M3). The trusted mobile receives the message, decrypts the first layer with K2, and sends the result to the bank server (M4). The bank server receives its identifier, the second challenge, and the answer to the first challenge; computes the answer to the second challenge; performs a keyed hash with K1; and sends it to the trusted mobile (M5). The trusted mobile receives the message, performs another keyed hash with K2, and sends the result to the trusted IoT device (M6). The trusted IoT device must decrypt the final message with K2 and K1 to verify the bank server's identity proof. The notation follows the identifiers and keys defined above, and the proposed authentication scheme is illustrated in Figure 8. A minimal sketch of the layered message construction is shown below.
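The following sketch illustrates the six-message layering with the AES-256 variant (authenticated encryption allows the intermediary to remove layers); the byte layout of the message fields is our illustration and not the exact encoding of Figure 8.

import os
from Crypto.Cipher import AES  # pycryptodome

def seal(key, payload):
    c = AES.new(key, AES.MODE_GCM)
    ct, tag = c.encrypt_and_digest(payload)
    return c.nonce + tag + ct

def unseal(key, blob):
    c = AES.new(key, AES.MODE_GCM, nonce=blob[:16])
    return c.decrypt_and_verify(blob[32:], blob[16:32])

K1, K2, K3 = (os.urandom(32) for _ in range(3))   # enrolled 256-bit shared keys

r_bs = os.urandom(16)                             # first challenge (PUF-seeded RNG in the design)
m1 = seal(K3, seal(K1, b"tIoTd" + r_bs))          # M1: BS -> TM (K1 layer inside K3 layer)
m2 = unseal(K3, m1)                               # M2: TM removes the K3 layer, forwards to TIoTD

challenge = unseal(K1, m2)                        # TIoTD verifies its identifier and reads r_bs
assert challenge.startswith(b"tIoTd")
r_iot = os.urandom(16)                            # second challenge
m3 = seal(K2, seal(K1, b"BS" + r_iot + challenge[5:]))  # M3: TIoTD -> TM (answer plus r_iot)
m4 = unseal(K2, m3)                               # M4: TM removes the K2 layer, forwards to BS

reply = unseal(K1, m4)                            # BS verifies its identifier and its challenge
assert reply.startswith(b"BS") and reply.endswith(r_bs)
m5 = seal(K1, b"ok" + reply[2:18])                # M5: BS answers r_iot under K1
m6 = seal(K2, m5)                                 # M6: TM adds the K2 layer for TIoTD
assert unseal(K1, unseal(K2, m6)) == b"ok" + r_iot  # TIoTD verifies BS's identity proof

In the SHA-256 version, the layers are keyed-hash values verified by recomputation over the same fields rather than decrypted.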
Formal Security Analysis

An automated security analysis using the Scyther tool is presented for the proposed protocols in the trusted mobile and trusted IoT device scenarios. Scyther is an open-source tool that allows the verification and analysis of security protocols. It is based on a formal semantics of security protocols to analyze different classes of attacks and possible protocol behaviors [61]. Scyther provides a graphical user interface and a Python command line interface, and it can be used on Windows and Linux operating systems. In the unbound mode, Scyther can output proof and attack trees; in the bound mode, Scyther states that no attacks exist within a certain bound, or showcases the identified attacks. Its input language resembles C/Java-like syntax and allows the modeler to describe protocols by defining a set of roles, which are in turn defined by a sequence of events [62]. Scyther allows the verification of claims related to the authentication properties of the analyzed protocols. These properties are defined in Reference [63]: aliveness, weak agreement, non-injective agreement, and injective agreement.

Definition 12 (Aliveness). After entity A completes a run of the protocol, if another entity B is apparently active, then the protocol guarantees aliveness of entity B to entity A.

Definition 13 (Weak Agreement). If the protocol guarantees aliveness of entity B to entity A, and if the protocol also guarantees aliveness of entity A to entity B, then a weak agreement is guaranteed between entities A and B.

Definition 14 (Non-injective Agreement). A protocol guarantees an initiator A non-injective agreement with another agent B on a set of data D if entities A and B have weak agreement and the two agents agree on the data values present in D.

Definition 15 (Injective Agreement). A protocol guarantees an initiator A injective agreement with another agent B on a set of data D if entities A and B have non-injective agreement and each protocol run of A corresponds to a unique run of B. This one-to-one relationship may be important in financial protocols.

The proposed protocol for the trusted IoT device scenario was modeled based on existing Scyther models [64] of the ISO 9798 standard for entity authentication, which were used in the conception of the SKID3 protocol [59]. The Scyther tool identified that challenges must also be protected, so that entities respond only to valid entry challenges. Three keys were necessary: K1 for the trusted IoT device and bank server, K2 for the trusted mobile and trusted IoT device, and K3 for the trusted mobile and bank server. The properties of secrecy, aliveness, weak agreement, non-injective agreement, and injective agreement could be proved with the Scyther tool for the extended SKID3 protocol with the three shared keys.

Trusted IoT Device

A proof of concept was implemented for the proposed protocol. It is composed of a mobile application for the authentication module, a server for bank server emulation, a web application for voice user interface emulation, and an embedded application for the trusted IoT device. This proof of concept is designed to have the same security level as state-of-the-art non-invasive PUF-based authentication, with the benefits of supporting non-invasive authentication with a simple enrollment process and using the PUF to improve nonce randomness.

Methods and Materials

The mobile application TM was developed for Android devices in Java, the BS server was developed in Python, and the webpage for the VUI was developed in HTML/JavaScript, using available libraries for Android [65,66] and Python [67,68]. The trusted IoT device was developed based on existing Python libraries [67-69] and the integrated Bluetooth 4.1 support of the Raspberry Pi 3 (https://www.cnet.com/, accessed on 11 December 2021). The devices used were a Samsung S20 Android smartphone, a Windows laptop with 8 GB of RAM, a router with 802.11 communication, and a Raspberry Pi 3, as illustrated in Figure 9. The proof of concept was executed in a local environment, with websockets communication, over WiFi and USB communication. Shared keys of 136 bits were used for the keyed hash (HMAC) with SHA-256 and for the version with AES-256 symmetric key encryption. All the code and response time results for the proof of concept are available online under a GPL-3.0 License (https://github.com/vthayashi/SKID3-PoC, accessed on 11 December 2021).

Tests and Implementation

The proof of concept was evaluated with the four following tests:

1. Correct shared keys;
2. Correct shared key K1, correct shared key K2, and wrong shared key K3;
3. Correct shared key K1, wrong shared key K2, and correct shared key K3;
4. Wrong shared key K1, correct shared key K2, and correct shared key K3.
The tests were executed successfully, as shown in Figures 10 and 11, with the proposed protocol using SHA-256 and AES-256, respectively.

Performance Analysis

The response time for the extended SKID3 protocol was obtained experimentally. A total of 1000 authentication requests were performed for each scenario, with a 2-s interval between requests. A normal distribution was assumed for the experimental results; thus, confidence intervals were obtained with a confidence level of 95% (significance level of 5%, sample size of 1000). For the SHA-256 version, the average response time was 392.37 ms ± 10.67 ms (i.e., from 381.70 ms to 403.04 ms; standard deviation of 172.18 ms) with serial communication (USB), and 542.76 ms ± 11.73 ms (i.e., from 531.04 ms to 554.49 ms; standard deviation of 189.19 ms) with wireless communication (WiFi). For the AES-256 version, the average response time was 383.76 ms ± 9.10 ms (i.e., from 374.66 ms to 392.86 ms; standard deviation of 146.83 ms) with serial communication (USB), and 578.96 ms ± 11.99 ms (i.e., from 566.97 ms to 590.95 ms; standard deviation of 193.46 ms) with wireless communication (WiFi). A minimal sketch of how these intervals are computed is shown below.
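The reported margins follow the normal approximation CI = mean ± 1.96 · s/√n; the sketch below reproduces, for example, the SHA-256/USB interval.

import math

def ci95(mean_ms, std_ms, n):
    half = 1.96 * std_ms / math.sqrt(n)  # normal approximation, significance level of 5%
    return mean_ms - half, mean_ms + half

print(ci95(392.37, 172.18, 1000))  # -> approximately (381.70, 403.04), as reported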
Continuous Authentication

The interested reader may consult previous work, which describes in detail the behavior learning in a smart home [70]. In this article, we focus on presenting how such a behavior factor may be integrated into continuous authentication to support session management (i.e., session beginning and end) in the proposed scheme. Our approach consists of leveraging energy consumption data collected by IoT devices to perform hands-free activity detection, as further explained. One additional requirement is related to the data minimization principle described in Section 3.2. As the personal data must be relevant and limited to what is necessary, we verify how granular the collected data should be to enable hands-free activity recognition.

Considered Scenarios

We consider two scenarios for hands-free voice interactions. The first one occurs whenever the user does not have IoT devices in the connected trusted location, or during the initial learning phase of the behavior learning model. In this situation, the user can activate or deactivate the hands-free authentication alternative by using an invasive authentication method (e.g., a token in the mobile device). If the user knows that they might perform a financial transaction by voice in the near future, it is possible to activate the hands-free authentication in advance and disable it after the financial transaction has been performed, in a similar way to how users unlock their virtual credit cards in advance. This initial manual phase provides data labeling (i.e., the timestamps at which the hands-free financial transactions are performed), which is used to automate the hands-free activity detection in the second scenario.

The second scenario is the non-invasive authentication for financial transactions by voice, with automated session management supported by hands-free activity detection. The user can specify in which contexts they wish to activate the hands-free interactions automatically. Whenever the user is in a hands-free context and performs a financial transaction by voice, the non-invasive authentication with the trusted IoT and mobile devices is performed for increased usability. As described in Section 3.1, some hands-free scenarios with financial transactions are a money transfer during a dinner party with friends and a payment while watching TV.

Some works found in the literature investigate daily activity recognition and forecasting in smart homes. It is possible to classify some of these daily activities as hands-free activities: cooking, eating, reading, washing dishes, and watching TV from [71], and cooking, eating, relaxing, and washing dishes from [72].

The proof of concept for hands-free activity detection in smart homes is presented in Figure 12. The raw data is collected by the IoT devices installed in the trusted location. In the data preparation step, the data from different devices is aggregated into a dataset consisting of events that occurred in specific time slots and in a specific location inside the household (e.g., higher energy consumption in the kitchen in the first hour of a workday). Based on the event metadata and the calendar of the smart home inhabitants, the events are labeled according to a subset of hands-free activities (i.e., watching TV, eating lunch and dinner). With a dataset covering at least one month, the hyperparameter tuning, model training, and validation steps are performed. Based on existing promising results for daily activity recognition with support vector machines (SVM) [73], we selected this machine learning model to develop our proof of concept. If the model's accuracy is above a specified threshold, the model is deployed and made available to detect hands-free activities in real time. This model is integrated with the proposed scheme to allow automated session management.

Testbed Data Collection

The smart home testbed is the same household with four inhabitants presented in a previous work [70], but using data collected with energy monitoring sensors instead of the light and motion sensors. The data was collected from June 2021 to August 2021, a total of 2 months. The smart meter used is the prototype presented in [74]. The smart meter and smart plugs were based on previous works on data collection using the ESP8266 development board [70,75]. In this work, we use the consumption of the kitchen and living room household sector, collected using the smart meter, and granular data collected from the kitchen (air conditioner, electric rice pan, and electric oven) and the living room (home office station, light bulb, TV) using smart plugs. A total of 7 energy monitoring sensors are used, resulting in a dataset with hourly granularity, 7 time series, and 1683 rows.

Proof of Concept

In the activity labeling step, we considered a fixed time window of one hour and the features of the most frequent event in the current window, a subset of the features used in [76].
Additionally, the day of the week is included as a feature, based on Reference [77]. The resulting 9 features are: the 7 energy consumption time series, the weekday, and the hour. The hands-free activities of the resident who works daily at the living room home office station were labeled manually (i.e., 1 if a hands-free activity, 0 if not). Most of the events are related to the lunch break and to watching TV after working hours (usually at night). The analysis using the Lasso regression model from the Python sklearn library [78] showed that the most important features are the kitchen electric oven, the living room TV, and the living room home office station.

The SVM model tuning was performed using GridSearchCV from sklearn, with 5-fold cross-validation to optimize hyperparameters such as kernel, gamma, and degree (where applicable), using the f1-score metric. The dataset (1683 elements) was partitioned into a training dataset (70%, i.e., 1178 elements) and a test dataset (30%, i.e., 505 elements). The experiments covered a total of 4 scenarios to investigate which time series must be used for hands-free activity recognition. Scenario A includes all 7 time series: the kitchen (air conditioner, electric rice pan, and electric oven), the living room (home office station, light bulb, TV), and the household sector covering the living room and kitchen appliances. Scenario B consists of 5 time series in the kitchen (electric rice pan and electric oven) and living room (home office station, light bulb, TV). Scenario C consists of the 3 most important features according to the Lasso regression analysis: the kitchen electric oven, the living room TV, and the living room home office station. The last case (scenario D) consists of the kitchen appliances only: the air conditioner, electric rice pan, and electric oven.

The recall metric is especially important to understand how many relevant hands-free activities were classified correctly when compared to the total number of hands-free activities, as illustrated in the confusion matrix of scenario A in Figure 13. The results of the 4 scenarios are presented in Table 2, considering the accuracy and recall metrics with 5-fold cross-validation. The accuracy results may be compared with the 91.52% accuracy reported for general activity recognition with an SVM model in the literature [73]. A minimal sketch of this tuning and evaluation step is shown below.
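The sketch below illustrates the described tuning and evaluation flow with sklearn; the file name, column names, and the exact hyperparameter grid are assumptions for illustration.

import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

df = pd.read_csv("smart_home_hourly.csv")            # hypothetical export of the testbed dataset
features = ["ac_kwh", "rice_pan_kwh", "oven_kwh",    # kitchen
            "office_kwh", "bulb_kwh", "tv_kwh",      # living room
            "sector_kwh", "weekday", "hour"]         # sector and calendar features
X, y = df[features], df["handsfree"]                 # 1 if hands-free activity, 0 if not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
grid = GridSearchCV(SVC(),
                    {"kernel": ["rbf", "poly"], "gamma": ["scale", "auto"], "degree": [2, 3]},
                    scoring="f1", cv=5)              # 5-fold cross-validation on the f1-score
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))     # tuned parameters and held-out f1-score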
Assumptions and Hypotheses

Assumption 1 states that it is desirable not to use existing security mechanisms with a high computational load, considering the IoT devices' constraints. The proposed challenge-response protocol is based on the hash function SHA-256 and on symmetric encryption with AES-256. Even though the SHA-256 version does not require decryption, as the AES-256 version does, both versions of the 6-message protocol are based on lightweight security mechanisms and provide mutual authentication with a simple enrollment process.

As specified in Assumption 2, using voice biometrics as the sole authentication factor is not enough to guarantee security for voice-triggered financial transactions. Our solution combines the trusted device paradigm (i.e., associated with "what you have") and smart home behavior to provide a non-invasive authentication mechanism.

The continuous authentication mechanism based on data collected by IoT devices could be achieved through hands-free activity recognition based on energy consumption data from a smart home testbed. Thus, Hypothesis 1 could be validated successfully, considering the smart home as a trusted connected location.

The performance requirements for non-invasive user authentication were achieved by relying on the local websockets communication between the trusted mobile and the trusted IoT device. Additionally, the context information (i.e., the presence of the trusted mobile in the trusted connected location, both associated with the same user) was employed in our solution. Therefore, the performance aspect of Hypothesis 2 was validated successfully, based on the results of the trusted IoT device proof of concept.

As for the privacy requirements associated with Hypothesis 2, the results in Table 2 show that the hands-free activity recognition model performed better in scenario B, with fewer time series, than in scenario A. Considering the data minimization principle defined in Section 3.2, a feature engineering process could be employed to reduce the number of time series to those relevant to the specific task of hands-free activity recognition. However, it is essential that such feature selection consider time series from different smart home rooms, as one may infer by comparing the recall metric of scenarios C and D.

Known Limitations

Even though behavioral biometrics are suitable for the non-invasive authentication scenario, this kind of biometrics is less likely to express authentication intent because it does not require a specific action by the end user [33]. Another shortcoming of behavioral biometrics in the smart home scenario is that the proposed behavior learning solution is highly dependent on the deployment context (i.e., which IoT devices are available in each smart home).

If an opponent compromises the trusted mobile of a specific user, no other user is compromised. This feature substantially decreases the scalability of potential attacks. Additionally, if our scheme is used with different bank server identifiers for each user, then there is less risk involved if this secret were also to be compromised. The main vulnerability in our scheme is the capture of the shared key in the mobile device, as the mobile device is a general-use device and could be subject to other attacks. Properly protecting the shared keys in the trusted IoT device is also of foremost importance, considering the possibility of physical attacks on these devices (e.g., side-channel attacks). We consider that obfuscation, key splitting, and secure multi-party computation mechanisms may help to enhance the security of our solution.

Another issue is the potential threat of quantum computing to the existing systems that support financial services. Novel technologies that enable innovations, such as the Blockchain for decentralized financial transactions, face this quantum computing threat as well [79]. Even though our proposed hands-free authentication does not use public key encryption, which is threatened by quantum computers [80,81], it is relevant to include a discussion regarding this issue. We may consider the Bitcoin scenario as an example of how the quantum computing threat can be addressed. Bitcoin is a decentralized cryptocurrency that uses the Blockchain technology to reach consensus without relying on a trusted third party [82]. However, the most vulnerable aspect in the case of Bitcoin is the classical signature scheme [83], which is not used in our proposed hands-free authentication scheme. To the best of the authors' knowledge, Ikeda [84] was the first to solve the problem of double spending by using quantum teleportation [85].
Quantum teleportation is a method of transporting quantum information to another location [82], related to the quantum communication research field. Ikeda [82,84] also uses the quantum digital signature schemes of Gottesman and Chuang [86], given by a quantum one-way function. In the case of the hands-free authentication proposed in this work, we use an additional authentication factor based on behavior, which is independent of the cryptography used. The trusted device authentication factor relies on symmetric algorithms and hash functions that are relatively resistant to quantum computers [80,81]. According to NIST, the impacts of large-scale quantum computers on the AES and SHA algorithms call for larger outputs for hash functions and larger key sizes for symmetric encryption [81]. It is possible to follow these guidelines and additionally investigate whether a post-quantum cryptography algorithm may be applied in our scheme. However, such an investigation must consider post-quantum cryptanalysis [87] and proper evaluation [88]. Cheng et al. [89] developed an assembly implementation of the SHA-512 hash algorithm for the ATmega 8-bit AVR microcontrollers with 128 kB of flash memory and 4 kB of RAM. This version of the SHA-512 hash algorithm is comparable to the SHA-256 implementation of Balasch et al. [90] for short messages of 500 bytes each. Therefore, it is feasible to use SHA-512 in our scenario, considering the constraints of IoT devices, to make the proposed scheme resilient to quantum computers.

Comparison with Related Work

As shown in Table 3, the proposed solution has an accuracy of 97%, comparable to REVOLT [30] and VAuth [32]. However, REVOLT has a complex enrollment process because it is based on biometrics and behavior authentication factors, which require training time or specific biometric registration, and VAuth is an invasive solution according to Definition 7, which defines wearable-based solutions as invasive. The response time of 383 ms presented by our proof of concept (AES-256 version over USB; the SHA-256 version yielded 392 ms) is comparable to VAuth and Wivo [31]. Still, Wivo relies entirely on the behavior authentication factor and thus has a complex enrollment process because of its training phase. UCFL [26] presents the best response time, though it also relies on a single trusted device authentication factor.

Even though our approach was evaluated with fewer users than the related work, we validated it in a real-world setting (i.e., "in the wild"), with no control over the inhabitants' routine during the 2-month data collection period. Moreover, we performed activity and person recognition in a multi-user scenario with 4 persons. VSButton [23] recognizes activities, but not who is performing them. Wivo performs voice liveness detection for certain persons, but does not recognize the associated activity, and it was validated with two users in the multiple-user scenario [31]. PALOT relied on a dataset from an apartment where participants were asked to perform certain daily activities while interacting with the deployed sensors, so it presented a certain degree of control over the inhabitants' activities [29]. WifiU conducted experiments to collect gait data in a typical laboratory of 50 square meters, which is a controlled environment [24].
Considering that the multi-user scenario is a challenge for smart home algorithm development [91,92], and that it is difficult to implement granular access control for smart speakers in multi-user environments, we advocate that our approach helps to close this research gap by presenting a way to perform hands-free activity recognition for a specific person. As shown in Table 2, it relies on device-level energy consumption data from two different environments for acceptable accuracy and recall results.

A comparison with third-party metrics for authentication is available in Table 4. This framework includes security, deployability, and usability aspects, so it is suitable for analyzing the proposed authentication scheme in general [93]. We chose the categories that apply to our hands-free authentication scenario: effortless to remember, nothing to carry for the end users, easy to learn, and infrequent errors under the usability aspect. Under deployability, we assess whether the cost per user is negligible. We also analyze whether the mechanism is resilient to leaks from other verifiers, whether it is resilient to theft, and whether it requires explicit consent.

The 4-digit spoken PIN used in Alexa has high usability and deployability, but weak security. The mobile token is secure but invasive, thus lacking in usability, and the wearable solution requires the users to carry an additional device, with a considerable cost per user. Voice and behavior biometrics are not yet mature, with frequent errors, and they face the issue that a leak from another verifier may compromise the factor entirely (i.e., revocation is limited). We use the behavior factor as a secondary factor to identify hands-free scenarios and reduce the potential attack window, and we rely on the autonomous trusted device to avoid requiring explicit consent, which is a trade-off between security and usability. However, the cost per user is not negligible, and the case of trusted device theft is not considered within the scope of this work.

Entropy is also an essential aspect of user authentication. Compared with other authentication schemes presented in the literature, displayed in Table 5, the trusted autonomous device used in the proposed scheme provides comparable security.

To the best of the authors' knowledge, our proposal is one of the first works to combine trusted device and behavior factors to perform user authentication in a non-invasive manner with a simple enrollment process. Our mechanism supports mutual authentication, with a security level comparable to the existing invasive authentication mechanism, and it presents a response time comparable to state-of-the-art schemes found in the literature. The proposed solution allows non-invasive authentication for financial transactions by voice while the user is performing hands-free activities in a trusted connected location (i.e., the smart home). The architecture considers using the PUF in the random number generator instead of the direct application of challenge-response pairs, which would result in a complex enrollment process. In addition, it integrates the data minimization principle into the behavior learning process to respect user privacy.

Final Considerations

Considering financial transactions by voice commands to personal assistants, we proposed a non-invasive mutual authentication protocol based on trusted IoT devices and hands-free activity recognition in a smart home.
Formal security analysis with the Scyther tool guided the definition of the extended versions of existing lightweight protocols to be used in the non-invasive authentication scheme. The first proof of concept was developed for the Android operating system, integrated with a bank server emulated in Python, with websockets communication over a local WiFi network. It also included an authentication module in a trusted IoT device, implemented on a Raspberry Pi 3. The second proof of concept showed how it is possible to provide hands-free activity recognition based on energy usage data collected by smart meter devices. It also employed different scenarios to investigate which subset of time series features is necessary to maintain acceptable recall and accuracy results, considering the data minimization principle.

As future work, the random number generator for the trusted IoT device based on physical unclonable functions (PUFs), intended to provide better nonces for the challenge-response protocol, could be evaluated experimentally. The dynamic random access memory (DRAM) PUF-based key generation found in the literature [97] is considered for a future PUF design; Chen et al.'s solution uses intrinsic sensors available in commodity devices and provided a proof of concept for the Raspberry Pi. A relevant work on random number generation presented a proof of concept with an additional static random access memory (SRAM) and a Raspberry Pi device [98]. We also consider the RC-PUF, which is based on additional passive components (i.e., resistors and capacitors) [99]. Another research opportunity is to validate the user's context in a more granular way, by verifying which room a certain person is in, based on direct communication between the trusted mobile and trusted IoT devices (e.g., Bluetooth). A further opportunity is to use obfuscation, key splitting, and secure multi-party computation mechanisms to enhance the security associated with key management in the trusted mobile and the trusted IoT device. Additional security validations may be performed with ProVerif [100].
Surgical resection for hepatocellular carcinoma: a single-centre's one decade of experience

Background and aims: Liver cancer is the third leading cause of global cancer deaths, and hepatocellular carcinoma is its most common type. Liver resection is one of the treatment options for hepatocellular carcinoma (HCC). This study aims to explore our hospital's more than a decade of experience in liver resection for HCC patients.

Methods: This is a retrospective cohort study on HCC patients undergoing resection from 2010 to 2021 in a tertiary-level hospital in Jakarta, Indonesia. Mortality rates were explored as the primary outcome of this study. Statistical analysis was done on possible predictive factors using Pearson's χ2. Survival analysis was done using the Log-Rank test and Cox regression.

Results: Ninety-one patients were included in this study. The authors found that the postoperative mortality rates were 8.8% (in hospital), 11.5% (30 days), and 24.1% (90 days). Excluding postoperative mortalities, the long-term mortality rates were 44.4% (first year), 58.7% (3 years), and 69.7% (5 years). Cumulatively, the mortality rates were 46.4% (1 year), 68.9% (3 years), 77.8% (5 years), and 67.0% (all time). Significant predictive factors for cumulative 1-year mortality include large tumour diameter [odds ratio (OR) 14.06; 95% CI: 2.59-76.35; comparing <3 cm and >10 cm tumours; P<0.01], positive resection margin (OR 2.86; 1.17-77.0; P=0.02), and tumour differentiation (P=0.01). Multivariate analysis found hazard ratios of 6.35 (2.13-18.93; P<0.01) and 1.81 (1.04-3.14; P=0.04) for tumour diameter and resection margin, respectively.

Conclusion: The mortality rate of HCC patients undergoing resection is still very high. Significant predictive factors for mortality found in this study benefit from earlier diagnosis and treatment, thus highlighting the importance of HCC surveillance programs.

Introduction

Liver malignancy is the third most common cause of cancer mortality globally, with 830 180 deaths in 2020 [1]. In 2020, Indonesia accounted for 20 920 deaths of liver cancer patients [2]. Most (75-85%, globally) primary liver cancer cases are due to hepatocellular carcinoma (HCC) [3]. HCC is also prominent in Indonesia; an Indonesian tertiary-level national referral hospital reported 158 HCC cases diagnosed from 2015 to 2017, with a 94.4% 3-year mortality rate [4].

Liver resection is one of the HCC treatment modalities, along with liver transplant, radiofrequency ablation (RFA), transarterial chemoembolization (TACE), hepatic arterial infusion chemotherapy (HAIC), and systemic therapy [5]. Liver resection is the first-line treatment option for patients with adequate liver function (Child-Pugh class A or B) and no extrahepatic spread [5].

HIGHLIGHTS

• Liver resection is one of the treatment options for hepatocellular carcinoma. It is the first-line treatment option for patients with adequate liver function and no extrahepatic spread. Patients are also selected based on Barcelona Clinic Liver Cancer classifications A and B.
• This article explores hepatocellular carcinoma patients undergoing liver resection from 2010 to 2021 in a tertiary-level hospital in Jakarta, Indonesia. The total population is included, with a total of 91 cases.
• Most patients are middle-aged males with chronic viral hepatitis. More than three out of four patients are assigned Child-Pugh A.
• A total of 201 liver segments were resected. Most of the procedures were done on the right lobe of the liver. The 5-year mortality rate of liver resection in hepatocellular carcinoma is 77.8%, with a quarter of those patients dying in the first 90 days. The mortality increased by 20% in the first and third years.
• Significant prognostic factors include tumour diameter, positive resection margins, and differentiation. Patients with tumours larger than 10 cm are ten times more likely to die in the first 90 days and fourteen times more likely to die in the first year.

Patients undergoing liver resection also have a better (36%) three-year mortality rate than patients across all treatments (94.4%) [4]. However, only 30% of HCC patients are eligible for resection; in addition to adequate liver function and no extrahepatic spread, the resection must also preserve 30-40% postoperative remnant liver volume [6-9]. Due to these strict prerequisites, liver resection patients are a distinct subset of all HCC patients globally. There has been limited evidence on liver resection for HCC in Indonesia. Only two studies were found; one is an abstract-only study with limited scope, and the other only analyzes results from a 1-year period [10]. Therefore, this study aimed to present the liver resection experience for HCC patients in Indonesia, with mortality rates being the primary objective. In addition, this study evaluated the predictive factors related to the mortality rate for resection patients in our centre.

Study design

This is a retrospective cohort study on patients undergoing liver resection due to HCC from 2010 to 2021 in a tertiary-level hospital in Jakarta, Indonesia. We analyzed HCC cases to explore patients' characteristics and possible predisposing factors of mortality after surgery. This study has been reported in line with the STROCSS criteria [11].

Study population

The inclusion criteria for this study were patients undergoing liver resection in our centre from 2010 to 2021 due to a confirmed diagnosis of HCC. Patients undergoing liver resection in other hospitals were excluded, even if the diagnosis or further care was done in our centre. In addition, patients with other malignancies or undergoing other treatment methods were excluded.

This study used total population sampling. We included 91 cases, from the first liver resection in 2010 to the most recent in 2021. Clinical, laboratory, and radiological data were collected from the medical records. Follow-up was done once, in January 2022, to confirm mortality status by contacting patients or family members.
Treatment of hepatocellular carcinoma patients in our centre

HCC patients are managed by a multidisciplinary team (MDT) consisting of hepatologists, radiologists, pathologists, radiation oncologists, surgeons, and other specialists related to the patient's condition. Baseline data, including age, sex, and important clinical data (presence of ascites, oedema, jaundice, etc.), were collected during the initial outpatient visit. Laboratory evaluation was done within one month of the operation, including prothrombin time, bilirubin, albumin, alpha-fetoprotein, and liver enzyme levels. The Child-Pugh (CP) score was then calculated using clinical (presence of encephalopathy or ascites) and laboratory (prothrombin time, albumin, and bilirubin levels) data [12]. The patient's liver function is classified from best to worst according to CP classes A to C [12]. Preoperative diagnosis was confirmed by pathology and radiology. The number of tumours and their sizes were also evaluated in preoperative imaging.

A weekly MDT meeting discussed the treatment options for confirmed HCC patients. Treatment assignment is done in consideration of the Barcelona Clinic Liver Cancer (BCLC) staging system, patient preference, and other clinical or socioeconomic considerations [13,14]. The BCLC staging system states that resection is considered for patients at very early (BCLC 0) or early (BCLC A) HCC stages; this includes CP class A patients with a single tumour or fewer than three small (<3 cm) tumours [13,14]. However, in some cases, clinical and socioeconomic considerations take precedence. One important socioeconomic consideration is that our national health insurance may not cover other treatment options (such as chemotherapy). Therefore, patients with BCLC B staging may be assigned to liver resection in our centre if the remnant liver is adequate, a free resection margin is possible, and both the MDT and the patient have agreed to the treatment.

As ours is a national referral hospital with a long waiting list for surgery, HCC patients may face a delay of 3-9 months from assignment to the actual surgery. Patients assigned to surgical resection underwent either laparoscopic or open surgery. One to four segments of the liver were removed. Tissue samples were taken for further pathologic examination, including assessment of cirrhosis and of cancer tissue within 1 cm of the surgical margin. Histological staging using the Edmondson-Steiner grade and assessment of tumour differentiation were also done postoperatively.

Study variables

The primary outcomes of this study are the mortality rates. We confirmed each patient's mortality status by evaluating the medical records and contacting patients and family members in January 2022. Mortality dates were also collected for patients who did not survive until 1 January 2022. We calculated the survival duration by subtracting the resection date from the mortality date. Survival time was evaluated from the first liver resection (for patients who underwent multiple procedures). We then used the survival duration to find the postoperative mortality (in hospital, 30 days, and 90 days), long-term mortality (1 year, 3 years, and 5 years), and cumulative mortality rates. The long-term mortality rates excluded the postoperative mortalities; the cumulative mortality rates included them. A minimal sketch of this computation is shown below.
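The sketch below illustrates the survival-duration and cumulative mortality-rate computation in Python; the file name and column names are hypothetical.

import pandas as pd

df = pd.read_csv("resections.csv", parse_dates=["resection_date", "mortality_date"])
df["survival_days"] = (df["mortality_date"] - df["resection_date"]).dt.days
died = df["mortality_date"].notna()

def cumulative_mortality(days):
    # share of all resected patients who died within the given window
    return (died & (df["survival_days"] <= days)).mean()

print(cumulative_mortality(90), cumulative_mortality(365))  # 90-day and cumulative 1-year rates

The long-term rates are obtained analogously after excluding the rows with survival_days <= 90.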
The secondary outcomes are factors predicting mortality after resection. Demographic characteristics (age >60 years and sex) [14] and preoperative, operative, and postoperative findings were the independent variables. Preoperative variables include the aetiology of HCC (viral hepatitis B, hepatitis C, or non-hepatitis B or C); alpha-fetoprotein (AFP) level (>400); CP class (A or B); and BCLC classification (A or B); as well as the number (single or multiple, i.e., more than one tumour) and diameter of the tumour (<3, 3-5, >5-10, or >10 cm) [5,7,9,15]. A subgroup analysis on the Child-Pugh scores excluding patients without cirrhosis was also done. The operative variables include the number of resected liver segments and the method of surgery (open surgery or laparoscopy). The postoperative variables are mostly histopathologic findings on samples taken during surgery. The findings include surgical margins, the presence of cirrhosis, vascular invasions, and tumour grading. A free surgical margin is defined as no malignant cell found within 1 cm of the incision [16]. Tumour grading was done according to the Edmondson-Steiner and the WHO classifications [17-20].

Statistical analysis

Statistical analysis was done using IBM SPSS version 24. Patient characteristics are presented using the median and interquartile ranges for continuous data and percentages for categorical data. Bivariate analysis used Pearson's χ2, comparing the independent variables with the 90-day, 1-year, cumulative 1-year, and all-time mortality. Survival analysis was done using the Log-Rank test (bivariate). The censoring date is 1 January 2022. Statistically significant results from the Log-Rank test were tested for multicollinearity. Variables that were statistically significant and free from multicollinearity were further analyzed using Cox regression, and hazard ratios with 95% CIs were reported.

Ethical clearance

The ethics committee of our centre approved this study by giving an ethical clearance with protocol number 19-11-1313. Written informed consent was obtained from the patients for publication and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.

Results

Patient's characteristics

Most patients were middle-aged (median age: 54) males (74.7%). The aetiology of HCC is primarily chronic viral hepatitis infection (62.6%), of which 53.8% of the total cases were caused by hepatitis B (Table 1). As the CP classification shows, most resection patients have good liver function; 85.7% of HCC patients are CP class A. The median diameter of the tumours before resection is 7 cm, and most patients (67.0%) have multiple tumours. We estimated that 70% of our resections were anatomical, with the rest being non-anatomical. Almost a fourth of post-resection patients have traces of malignant tissue within 1 cm of the surgical margin; however, all resections done in 2021 had negative surgical margins. The median histopathological classification is grade 3 (Edmondson's) with moderate differentiation. Some data are missing from the vascular invasion results, especially those before 2013.

From 2010 to 2021, 91 liver resection procedures were done, and a total of 201 liver segments were resected. Most (54 out of 95) procedures were done on the liver's right lobe (segments V-VIII). The left lobe was removed partially or completely in 40 cases. Central hepatectomy was the rarest procedure, with only one done. Segment III was the most often removed segment, with 37 removals, followed by segment II, with 35 cases. The complete recap of all procedures can be seen in Fig. 1.

Mortality rate

We found that eight resection patients died during their stay in the hospital. The first five mortalities happened during the earlier half of the decade; the causes of death were not recorded. During the latter half, there were three mortalities: one in 2017 and the other two during the COVID-19 pandemic. All three deaths were caused by respiratory failure secondary to postoperative pneumonia. A quarter of post-resection patients died in the first three months post-resection (Table 2). The cumulative mortality rate subsequently increased by 20% in the first and third years. Only 10 (out of 45) patients survived five years after the liver resection.
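As an aside, the Log-Rank and Cox regression steps described under the statistical analysis could be reproduced outside SPSS; the sketch below uses the Python lifelines library, with hypothetical file and column names.

import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("resections.csv")  # duration (days), event (1 = died), and covariates

# bivariate Log-Rank test, e.g., tumours over 10 cm versus the rest
big, rest = df[df["tumour_over_10cm"] == 1], df[df["tumour_over_10cm"] == 0]
print(logrank_test(big["duration"], rest["duration"],
                   event_observed_A=big["event"], event_observed_B=rest["event"]).p_value)

# multivariate Cox regression on the variables retained after the multicollinearity check
cph = CoxPHFitter()
cph.fit(df[["duration", "event", "tumour_over_10cm", "positive_margin"]],
        duration_col="duration", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs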
Factors contributing to mortality

Three parameters show statistically significant 1-year and all-time mortality results: largest tumour diameter, resection margin, and tumour differentiation (Table 3). Patients with tumours larger than 10 cm are ten times more likely to die in the first 90 days [odds ratio (OR) 10.89; 95% CI: 1.27-93.07, P = 0.01] and fourteen times more likely in the cumulative first year (OR 14.06; 95% CI: 2.59-76.35, P < 0.01) than patients with tumours smaller than 3 cm. This association is not found for first-year mortality if the first 90 days are excluded. We also found that the number of segments resected is only statistically significant in the first 90 days.

There is no sign of multicollinearity between the three variables submitted to the multivariate analysis (all variance inflation factors (VIF) < 5). The Omnibus test reveals that the model is a good fit, with an overall significance of P < 0.01. A tumour size greater than 10 cm yields the highest mortality risk increase (compared to the reference), with a hazard ratio of 6.35 (95% CI: 2.13-18.93, P < 0.01). The hazard ratio decreases stepwise for patients with 5-10 cm tumours (hazard ratio 3.67, 95% CI: 1.19-11.34, P = 0.02) and 3-5 cm tumours (hazard ratio 2.62, 95% CI: 0.28-8.32, P = 0.10). Another statistically significant result is a positive resection margin, with a 1.81-times risk increase (95% CI: 1.04-3.14, P = 0.04) compared to a negative margin at any given time point. Tumour differentiation is not statistically significant in the multivariate analysis.

Discussion

In this cohort study, we found that most HCC patients undergoing resection are middle-aged males with chronic hepatitis B. Patients undergoing liver resection have better clinical parameters (Child-Pugh class) than those undergoing all treatment modalities [4]. This is reflected in the mortality rate: patients across all treatment modalities have an overall worse mortality rate [4]. However, the mortality rates found in this paper are still high, worse than those found in other Asian countries (46-69.5% 5-year mortality rate) [21]. This difference may be caused by a larger portion of cases in this paper having large (>5 cm) or multiple tumours, possibly due to the long waiting time [22]. We found that the largest diameter of the tumour, the number of segments resected, and tumour differentiation are statistically significant predictive factors for postoperative (90-day) mortality. In addition, the largest diameter of the tumour, a positive resection margin, histopathological grading (Edmondson's), and tumour differentiation are statistically significant independent predictive factors for cumulative 1-year and all-time mortality. Interestingly, only the surgical margins are statistically significant in our 1-year analysis if mortalities that happened during the first 90 days are removed. This shows the substantial impact the first 90 days have on overall survival.

Predictive factors for mortality in HCC cases are highly contested in previous studies. For example, five studies agreed with our finding that the largest diameter of the tumour is an important predictive factor [22-26]. However, five other studies found it not to be statistically significant [27-31]. This large disparity in findings is possibly due to differences in classification, surgical considerations, protocols, and demographic characteristics. Therefore, future studies or reviews may need to take these differences into account.
Based on the hazard ratios, tumour size is this paper's most important predictor of mortality. Our findings are similar to those of the 2018 study by Wu et al. [32], which shows that tumour size is an independent predictor of survival. Larger tumours require more liver segments to be removed, which is a significant predictor of postoperative mortality in this and other studies [33]. Tumour size is also related to tumour differentiation, with extensive or faster-growing tumours having poorer differentiation [34]. However, our analysis found no evidence of multicollinearity between tumour size and differentiation (VIF = 1). Tumour differentiation was a significant prognostic factor in our univariate analysis. This result is similar to another study favouring HCC cases with better tumour differentiation [35].

A positive surgical margin is also an important predictive factor for mortality according to the survival analysis. In contrast to our postoperative analysis, a statistically significant result is also found in our long-term and cumulative bivariate analyses. This phenomenon may be due to positive surgical margins carrying a higher risk of tumour recurrence after 1 year [36]. Although a resection margin wider than the one described in this study (> 1 cm) may lead to better long-term overall survival [16,37], ensuring a negative 1 cm surgical margin is not always possible because of anatomical and functional limitations. It should be noted that other studies might use a 1 mm or no-ink margin. Blind enlargement of the surgical margin is also not recommended because it could harm normal liver parenchyma [38]. Our centre has improved by implementing intraoperative ultrasonography in 2020; we achieved a 100% negative surgical margin rate in 2021.

The significant predictive factors for mortality found in this study could benefit from earlier diagnosis and treatment, especially given that the tumour volume doubling time is 3-4 months and tumours are even more aggressive among Asians due to the higher prevalence of hepatitis B [32]. A good surveillance program for at-risk populations may improve survival by finding HCC patients while they have smaller tumours with better differentiation, both of which are important survival factors in our study and other studies on liver resection [7-9,39,40]. HCC patients found at an earlier stage are also eligible for other curative treatment modalities with better survival [41].

This study is the first long-term evaluation of liver resection for HCC in Indonesia. The main limitation of this study is inadequate documentation of subsequent follow-ups post-surgery. As a national referral hospital, some patients are referred from other cities. After resection, the patients may receive further care locally or be unable to travel for subsequent follow-ups. Therefore, this study did not consider adjunctive treatments, clinical conditions at time points post-resection, recurrence, metastasis, or the cause of death. Another important limitation is that some data (including surgical complications, post-surgical complications, and in-hospital deaths) were not properly recorded because they are older than 5 years (Indonesia's standard active medical record retention period).
Our centre plans to create a registry of HCC patients undergoing resection to overcome this issue. An HCC registry could also be helpful to evaluate and monitor previously insufficiently documented variables, such as intrahepatic metastasis, recurrence, portal vein thrombosis, and invasion of nearby structures. Further studies should use standardized admission, surgical, and discharge forms emphasizing textbook liver outcomes.

Conclusion

The one-decade mortality rate of liver resection for HCC patients in an Indonesian tertiary-level hospital is high. We found that the largest tumour diameter, a positive resection margin, and tumour differentiation are statistically significant independent predictors of mortality according to bivariate and survival analyses. Most of those prognostic factors benefit from earlier diagnosis and treatment.

Figure 1. Liver resection procedures done and liver segments resected from 2010 to 2021.

Table 1. Characteristics of HCC patients assigned to liver resection.

Table 2. The mortality rate of hepatocellular patients post-resection. a Excluding mortality which occurred in the first 90 days.

Table 4. Cox regression of hepatocellular patients post-resection. * Values are statistically significant. ref., reference.
2024-02-09T16:09:10.029Z
2024-02-08T00:00:00.000
{ "year": 2024, "sha1": "7fff2fd4ec0ff089139876e4a6feb5a7c3608f51", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/ms9.0000000000001746", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fae37396717ba23875c630f7ae49a084b5b8cb29", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
225251356
pes2o/s2orc
v3-fos-license
Research on the Development of Exhibition Industry in Wuhan Under the COVID-19

A disaster broke out suddenly in Wuhan in December 2019. The tourism and exhibition industries, two of the three main economic industries, fell into trouble together. The epidemic has had a significant impact on the economy and society in China, and even on the global economy. As a result, the convention and exhibition industry has been affected, with work even coming to a halt. In addition, many meetings were cancelled or delayed. COVID-19 not only impacted the exhibition industry but also brought challenges and opportunities. Finally, the research also gives some suggestions for the exhibition industry in the future.

INTRODUCTION

The convention and exhibition industry has developed rapidly in China since the 1990s, and despite its late start it now has enormous potential and a strong position in the world. With the continuation of the thorough reform of the economic system and the reform and opening-up policy, the convention and exhibition industry entered a period of transformation and significant development after the Third Plenary Session of the 11th Central Committee of the Communist Party of China. Before the mid-1980s, the Chinese focused on the organization and management of exhibitions held in other countries. China's exhibition industry did not become international until it participated in the Basel Sample Fair in Switzerland. In 1999, China successfully held the 184-day World Horticultural Exposition in Kunming, marking a new stage in the development of the Chinese exhibition industry. Up to now, the conference and exhibition industry has become the "bread" and the "business card" of the city and the country. Like a rising sun within the service sector, exhibitions, tourism, and real estate are the three biggest economic industries.

Exhibition Venues in Wuhan

Wuhan has many competitive exhibition centers. The first is the Wuhan International Convention & Exhibition Center, built in 1956. At that time, it was the fourth-largest comprehensive exhibition venue in China, after those in Beijing, Shanghai, and Guangzhou. It covers 127,000 square meters in total, including a 5,000-square-meter exhibition hall, and 2,800 international standard booths can be set up. In addition, the Wuhan Science and Technology Convention Center, located by East Lake, is also a large, intelligent, and multi-functional comprehensive convention center. It has not only a well-equipped conference hall but also office buildings, restaurants, hotels, apartments, and other supporting facilities. Even so, these venues did not give Wuhan strong competitive advantages over other cities. With the completion of the Wuhan International Expo Center in 2011, Wuhan became the main exhibition city in central China, because the center is the biggest exhibition venue in central China and the third biggest in China at the moment. The Wuhan International Expo Center can provide 6,880 standard international booths across 180,000 square meters, which is bigger than 18 football fields. Besides that, it also has the famous five-star InterContinental hotel, the Greenland center, a music fountain, and great landscaping. In general, Wuhan's exhibition venues are strongly competitive.
The Status of Wuhan's Exhibition Industry in China

According to the survey, the Wuhan International Convention and Exhibition Center, Wuhan Cultural Expo Center, China Optical Valley Science and Technology Center, and Wuhan International Expo Center hosted 1,600 exhibitions and received more than 10 million exhibitors and visitors in 2017, not including the small pavilions. However, according to the ranking of "The Most Competitive Exhibition Cities in China" released by the China Conference and Exhibition Economics Research Institute in 2017, Wuhan was not among the first-tier exhibition cities. In the final analysis, the reason is that the scale of its exhibitions was small, even though Wuhan hosted many exhibitions and received many visitors. With the 7th CISM Military World Games held successfully in Wuhan in October 2019, Wuhan became a first-tier exhibition city and attracted worldwide attention.

The Impact on the Chinese Exhibition Industry

Because of COVID-19, and in order to manage the outbreak effectively, all companies stopped their work. In the exhibition industry, all exhibitions were delayed or cancelled. As a result, COVID-19 affected the exhibition industry seriously. According to the Chinese exhibition economy seminar's research report on the impact of the COVID-19 outbreak on the global international convention industry, data on international conferences from February to June 2020 show that COVID-19 mainly influenced the Asia-Pacific countries, especially China [1]. Almost 86 international meetings were cancelled and 44 were changed during the first six months of this year. The proportion of changed meetings is as high as 51.16%, the highest worldwide. The survey also suggests that if COVID-19 is controlled and does not return by the end of this year, global international conferences will resume at the end of 2020, but conferences in the Asia-Pacific countries, especially in China, will not recover for the whole year.

The Influence on Wuhan's Exhibition Industry

Wuhan was the city most affected by COVID-19; all industries and companies were affected from January 2020.

THE FUTURE PROSPECTS OF WUHAN'S EXHIBITION INDUSTRY UNDER THE COVID-19

Cities in China have basically returned to work because of the control of COVID-19. The epidemic has not only had a great impact on the exhibition industry but has also brought opportunities, for instance, resource integration, industrial upgrading, and the development of the online exhibition industry. Wuhan should establish and develop online exhibitions as Beijing has done. This can promote innovation in exhibition formats and reduce the influence of COVID-19 on the exhibition industry. What's more, this is a new opportunity for exhibitors. Due to the impact of the epidemic, exhibitors also need a place to show their products, advertise, and find and maintain good relationships with customers, in order to reduce their financial and economic pressures. However, there are only a few online platforms for the Wuhan convention and exhibition industry, so the construction of e-commerce platforms needs to be improved. In addition to building online exhibition platforms, technology and information networks also need to be developed to ensure digital transformation, intelligent upgrading, and integrated innovation [3]. With the development of live broadcasting and 5G networks, offline exhibition halls and online platforms should be combined.
The exhibition hall is then not only a physical venue but also a live broadcasting platform and information distribution center. From the government's perspective, policies should be formulated to support the development of the exhibition industry [4]. Companies can resume work on the basis of good safety and prevention measures. Secondly, the role of industry associations is also very important, and their leadership and coordination roles should be strengthened. As a city with one million college students and more than 80 universities and colleges, Wuhan has competitive advantages in talent cultivation. At present, compared with tourism and hospitality management, only a few universities or colleges offer exhibition management programs, so the development of exhibition management education should be strengthened. Finally, every exhibition hall should strengthen its own technology, keep pace with the advent of the 5G era, and plan a new model of integrated online and offline development [5].

CONCLUSION

COVID-19 has not only had a great impact on the exhibition industry in Wuhan but has also brought opportunities to the industry. The epidemic represents a major change for the exhibition industry, and the limitations of offline exhibitions have gradually been exposed. The whole industry is thinking about how to develop exhibitions in the future. In addition, the conference and exhibition industry should make full use of 5G, VR/AR, big data, and other modern information technologies, combine online and offline channels, and strive to develop new models.
2020-09-10T10:20:25.966Z
2020-08-28T00:00:00.000
{ "year": 2020, "sha1": "a71c4f0c76313ac074c77e9bec32e1cad6ab612f", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125944143.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "56b258b32c03551c008bc540e4ec6b32a67c1a3c", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Engineering" ] }
258392888
pes2o/s2orc
v3-fos-license
The Effect of Rainfall and Illumination on Automotive Sensors Detection Performance

Vehicle safety promises to be one of the Advanced Driver Assistance System's (ADAS) biggest benefits. Higher levels of automation remove the human driver from the chain of events that can lead to a crash. Sensors play an influential role in vehicle driving as well as in ADAS by helping the driver to watch the vehicle's surroundings for safe driving. Thus, the driving load is drastically reduced for steering as well as accelerating and braking during long-term driving. The baseline for the development of future intelligent vehicles relies even more on the fusion of data from surrounding sensors such as Camera, LiDAR and Radar. These sensors not only need to perceive in clear weather but also need to detect accurately under adverse weather and illumination conditions. Otherwise, a small error could have an incalculable impact on ADAS. Most current studies are based on indoor or static testing. To address this problem, this paper designs a series of dynamic test cases with the help of outdoor rain and intelligent lighting simulation facilities to make the sensor application scenarios more realistic. As a result, the effect of rainfall and illumination on sensor perception performance is investigated. As speculated, the performance of all automotive sensors is degraded by adverse environmental factors, but their behaviour is not identical. Future work on sensor model development and sensor information fusion should therefore take this into account.

Introduction

The causes of traffic accidents can be assigned to three categories: driver-related, vehicle-related and environment-related critical causes [1]. According to the Stanford Center for Internet and Society, "ninety percent of motor vehicle crashes are caused at least in part by human error" [2]. Thus, in order to eliminate driver-related factors, the demands of autonomous driving vehicles have primarily driven the development of ADAS. Furthermore, vehicle-related factors are mainly related to the robustness of vehicle components. For example, if the data coming from the sensors are not accurate or reliable, they can corrupt everything else downstream in ADAS. Finally, environment-related factors also raise challenges for road safety. For example, in a dataset of traffic accidents collected in a Chinese city from 2014 to 2016, approximately 30.5% of accidents were related to harsh weather and illumination conditions [3]. For these reasons, automotive manufacturers are placing a very high priority on the development of safety systems. Therefore, a reliable and safe ADAS can prevent accidents and reduce the risk of injury to vehicle occupants and vulnerable road users. To fulfil this requirement, the sensors must be highly robust and operate in real time while also being able to cope with adverse weather and lighting conditions. As a result, multi-sensor fusion solutions based on Camera, LiDAR and Radar are widely used in higher-level automated driving for a powerful interpretation of a vehicle's surroundings [4-6].
According to road traffic accident severity analyses [3,7,8], late-night and adverse weather accidents are more fatal than other traffic accident factors. Driving at night under low illumination conditions and rainfall proved to be the most important factors, leading to the highest numbers of accidents, fatalities and injuries. In the state of the art, some studies have outlined the impact of the aforementioned environmental factors on sensor performance [9-11]. For example, regarding illumination conditions, LiDAR and Radar are active sensors which do not depend on sunlight for perception and measurements, as summarized in [9]. In contrast, the Camera is a passive sensor affected by illumination, which brings up the problem of image saturation [12]. The Camera is mainly responsible for traffic lane detection, which relies on the difference in grey values between the road surface and lane boundary points. Namely, the value of the grayscale gradient varies according to the illumination intensity [13]. The study of [14] demonstrated that artificial illumination is a factor in detection accuracy. Meanwhile, object detection used in ADAS is also sensitive to illumination [15]. Therefore, it is important to build a system with multiple sensors rather than depending on a single one.

Unlike illumination, which mainly affects the Camera, the negative effects of rainfall must be taken into account for all vehicle sensors. Rainfall is a frequent adverse condition, so it is necessary to consider its impact on every sensor. As shown in [16], raindrops on the lens can cause noise in the captured image, resulting in poor object recognition performance. Although the wipers eliminate raindrops to ensure Camera perception performance, sight distance values vary with the intensity of the rainfall, to the extent that the ADAS function is suspended [17]. Furthermore, in other studies and analyses of the influence of rainfall on the LiDAR used in ADAS, all sensors demonstrate sensitivity to rain. At different rainfall intensities, the laser power and the number of point clouds decrease, resulting in reduced object recognition, as LiDAR perception is dependent on the received point cloud data [18,19]. This effect is mostly caused by water absorption in the near-infrared spectral band. Some experimental evidence indicates that rainfall reduces the relative intensity of the point cloud [10]. Although Radar is more environmentally tolerant than LiDAR, it is subject to radio attenuation due to rainfall [20]. Compared to normal conditions, simulation results show that the detection range drops to 45% under heavy rainfall of 150 mm/h [21]. A similar phenomenon is confirmed in the study of [22]. A humid environment can cause a water film to form on the covering radome, which can affect the propagation of electromagnetic waves at microwave frequencies and lead to considerable loss [23]. Meanwhile, the second major cause of Radar signal attenuation is the interaction of electromagnetic waves with rain in the propagation medium. Several studies have obtained quantitative data demonstrating that precipitation generally affects electromagnetic wave propagation at millimetre wave frequencies [24,25]. Therefore, the negative impact of rainfall directly affects the recognition capability of the perception system, which results in the ADAS function being downgraded or disabled.
No sensor is perfect in harsh environmental conditions. There are already several scientific studies showing experimental results for the sensors in different environments and giving quantitative data. However, in most cases these experiments were carried out under static or indoor conditions [10,11,19,26,27], which makes it difficult to comprehensively evaluate sensor performance based on these laboratory data alone. This is because, in the actual road traffic environment, vehicles equipped with sensors drive dynamically and ADAS is also required to cope with various environmental factors at different speeds. To compensate for the limitations of current implementations, in this study we design a series of dynamic test cases under different illumination and rainfall conditions. In addition, we consider replicating more day-to-day traffic scenarios, such as cutting in, following and overtaking, rather than a single longitudinal test. The study statistically evaluates sensor detection data collected at a proving ground for autonomous driving. Thus, a more comprehensive and realistic comparison of experimental data from different sensors in adverse environments can be conducted, and we discuss the main barriers to the development of ADAS.

The outline of the subsequent sections of this paper is as follows: the proving ground and test facilities are introduced in Section 2. Section 3 presents the methodology for test case implementation. Section 4 demonstrates the statistics from real sensor measurements and the evaluation of the main automotive sensors. Limitations of sensors for ADAS are discussed in Section 5. Finally, a conclusion is provided in Section 6.

Test Facilities

The proving ground and test facilities are introduced through our measurements conducted at the DigiTrans test track. This proving ground is designed to replicate realistic driving conditions and provide a controlled environment for testing autonomous driving systems. The test track enables the simulation of various environmental conditions to test the detection performance of the sensors under different scenarios. Further details regarding the test track are provided in Section 2.1. Section 2.2 introduces three commonly used sensor types tested in our experiments. These sensors are widely used in the current automotive industry, and their detection performance under adverse weather conditions is of great interest. We introduce a ground truth system in Section 2.3 to analyze the sensor detection error. This system allowed us to evaluate the accuracy and reliability of the sensors in detecting the surrounding environment.

Test Track

DigiTrans is a test environment located in Austria that collaborates with national and international partners to furnish expertise and testing infrastructure while supporting testing, validation, research and implementation of automated applications within the realm of municipal services, logistics and heavy goods transport. DigiTrans expanded the decades-old testing site in St. Valentin (see [28]) in multiple phases to meet the demands of testing automated and autonomous vehicles.
In our study, we focus on the influence of adverse weather conditions on automotive sensors. Namely, testing these technologies in a suitable, realistic and reproducible test environment is absolutely necessary to ensure functional capability and to increase the road safety of ADAS and AD systems. To create this test environment, DigiTrans has built a unique outdoor rain plant (see Figure 1) that provides important insights into how natural precipitation conditions affect the performance of optical sensors in detail and how to replicate the characteristics of natural rain. The outdoor rain plant covers a total length of 100 m with a lane width of 6 m. It is designed and built to replicate natural rain characteristics in a reproducible manner. Figure 2 shows a cross-section of the rain plant in the longitudinal driving direction. The characteristics of rain are predominantly delineated by its intensity, homogeneity distribution, droplet size distribution and droplet velocities. Rain intensity refers to the average amount of water per unit of time (e.g., mm/h). The homogeneity distribution provides information on the spatial distribution of rain within a specific wetted area, with the mean value of the homogeneity distribution being the intensity. Droplet sizes are measured as the mass distribution of different droplet sizes within a defined volume, with typical diameters ranging from 0.5 to 5 mm; the technical information can be found in [29]. It should be mentioned here that natural rain droplets have no classical teardrop shape [30]. The fourth characteristic is droplet speed, which varies according to droplet size and weight-to-drag ratio, as described in [31]. The present study conducted tests under two different rain intensities, namely 25 and 100 mm/h, corresponding to mid- and high-intensity rain based on internationally accepted definitions [32].

Tested Sensors

In our research, we primarily focus on the performance of three sensors (Camera, Radar, LiDAR) widely used in the automotive industry. Table 1 shows their specific parameters and performance. In the experiment, the sensors and measurement systems were integrated into the measurement vehicle, referred to as the "ego car", shown in Figure 3. Meanwhile, the vehicle's motion trajectory and dynamics data were recorded using GPS-RTK positioning (using a Novatel OEM-6-RT2 receiver) and the GENESYS Automotive Dynamic Motion Analyser (ADMA). The GPS-RTK system was also used to provide global time synchronization, ensuring that the sensor detection data transmitted on the bus were aligned, making it easier to post-process the data.
Ground Truth Definition

In Figure 3, the RoboSense RS-Reference system is mounted on the roof of the vehicle and provides the ground truth data for the measurements. Hence, we quantify the detection errors of the sensors using the ground truth data. It is a high-precision reference system designed to accurately evaluate the performance of LiDAR, Radar and Camera systems. It provides a reliable standard for comparison and ensures the accuracy and consistency of test results. The RS-Reference system uses advanced algorithms and sensors to provide precise and accurate data for multiple targets. This allows for high-accuracy detection and tracking of objects, even in challenging environments and conditions. Due to the involvement of multiple vehicles in our test scenarios, the use of inertial measurement systems relying on GPS-RTK is unsuitable, as these systems can only be installed on one target vehicle. In our case, multiple object tracking is essential, which is why we opted for the RS-Reference.

To validate the measurement accuracy of the RS-Reference system, we compared it with the ADMA. The latter enables highly accurate positioning with an accuracy of up to 1 cm. Therefore, the ADMA is used as a benchmark to verify the accuracy of the RS-Reference. The target and ego cars were equipped with ADMA testing equipment during the entire testing process. Additionally, the RS-Reference was installed on the ego car. Due to the difference in reference frames, the reference benchmarks for the ADMA and RS-Reference on the ego car were calibrated to the midpoint of the rear axle. On the target car, the ADMA coordinate system was transformed to the centre point of the bumper to serve as a reference, which is consistent with the measurement information provided by the RS-Reference. The accuracy information is summarized in Table 2. The RS-Reference does not perform as accurately as the ADMA in terms of longitudinal displacement errors. However, in light of our designed test cases, multiple targets must be tracked. With limited testing equipment, the RS-Reference can meet the multi-target detection needs, which provides excellent convenience for subsequent post-processing. Therefore, the measurement information from the RS-Reference can serve as ground truth to support the evaluation of the other sensors' performance.
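The reference-point calibration described above (shifting each system's measurements to a common point such as the rear-axle midpoint or the bumper centre) amounts to a rigid-body lever-arm transform. The following Python snippet is a minimal 2D illustration under assumed offsets; it is not the actual RS-Reference or ADMA processing, and all numerical values are placeholders.

```python
# Minimal sketch of a lever-arm correction: translate a measured position by a
# body-fixed offset rotated into the global frame by the vehicle heading.
# The offsets below are hypothetical placeholders, not calibration values.
import numpy as np

def to_reference_point(position_xy, heading_rad, lever_arm_xy):
    """Shift a measured position to a common reference point on the body."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    R = np.array([[c, -s], [s, c]])          # 2D rotation: body -> global
    return position_xy + R @ lever_arm_xy

# Example: shift a measured fix to an assumed bumper centre point.
measured_fix = np.array([105.2, 6.4])        # measured position [m]
heading = np.deg2rad(3.0)                    # vehicle heading
bumper_offset = np.array([2.1, 0.0])         # body-frame lever arm [m]
print(to_reference_point(measured_fix, heading, bumper_offset))
```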
Test Methodology

The methodology for test case implementation involved a total of seven different manoeuvres included in the overall manoeuvre matrix. These scenarios were carefully selected to replicate real-life driving situations. To cover a common repertoire of manoeuvres, each test scenario considers both low and high vehicle speeds. One main research question of this project is to assess the influence of different daytime and weather conditions on sensors such as LiDAR, Radar and Camera. Figure 4 shows the test matrix with all daytime and weather conditions. The experiments were performed during the daytime on a dry/waterlogged road. Meanwhile, tests were also conducted under moderate and heavy rain. For nighttime conditions, all tests were conducted under good artificial lighting; the weather conditions were, respectively, a dry road and simulated moderate and heavy rain. All scenarios were performed on the dynamic driving track underneath the rain plant of the proving ground (see Section 2), thus ensuring consistent experimental conditions. Finally, 276 test cases were performed. Each weather condition consists of seven manoeuvres with two variations in speed, and each variation was repeated three times to allow an additional statistical evaluation. To get as close as possible to automated driving behaviour, each vehicle that was equipped with Adaptive Cruise Control (ACC) had it activated while driving. Hence, distance maintenance to the front target and acceleration or deceleration of the vehicle were controlled by the ACC. Since the rain simulator is only 100 m long, the main part of each manoeuvre had to be performed under the rain simulator. Table 3 illustrates the entire test matrix with pictograms and a short description; the manoeuvres are summarized below.

Accelerate Leaving: The target vehicle in front accelerates and drives away from the ego vehicle; this case represents a drive-off from traffic lights or a stop sign.

Accelerate Approach: This case depicts the ego car heading towards and approaching traffic congestion or slow-moving traffic. The ego car with the ACC function automatically slows down and maintains a safe distance from the target car.

Lateral Leaving: An evasive manoeuvre or a lateral lane change is presented. The test speed covers different situations from low speed to high speed, and the target car performs a double lane change after reaching the preset speed.

Cut-in: Cutting in is a common driving behaviour. The ego car with the ACC function follows the front car while the side-by-side vehicle accelerates to overtake and cut in at a preset speed.

Cut-out: The opposite of the last described action is the cut-out. With the ACC following function activated, the front car cuts out to the adjacent lane and performs the overtaking.

Separation Test: This scenario presents an ego car with the ACC function approaching traffic congestion or waiting for a red light. It usually occurs in urban scenarios with multiple lanes.

Platoon Test: Vehicle platooning often occurs on highways; using the ACC function allows for smaller speed fluctuations to keep up with traffic.
Results and Evaluation

After a series of post-processing steps, we collected 278 valid measurement cases, each sensor contributing more than 80,000 detection data points, which provided the statistics from real sensor measurements and the evaluation of the main automotive sensors. In this section, we present the quantitative analysis for each sensor to show its detection performance. Since the distance of the rain simulator is only 80 m, the collected data are filtered based on GPS location information to ensure that all test results are produced within the coverage area of the rain simulator. For the rainfall simulation, we split the measurements into moderate and heavy rain conditions with intensities of 25 mm/h and 100 mm/h, respectively. Meanwhile, the artificial illumination condition is also considered in our tests. Additionally, we discuss the influence of detection distance on the results; however, due to the rain simulator's length limitation, the effect of environmental factors on sensor detection is not considered for that part of the results. As introduced in Section 3, the test scenarios have been divided into two parts: daytime and nighttime. The daytime tests are further divided into dry and wet road conditions, as well as moderate and heavy rainfall conditions simulated using the rain simulator. For the nighttime tests, the focus is only on dry road conditions and moderate rainfall, given the test conditions. This approach thoroughly evaluates the systems' performance under different weather and lighting conditions.

According to the guide to the expression of uncertainty in measurement [36], the detection error can be defined by Equation (1), where Data_measured is the sensor measurement output, the ground truth is labelled Data_reference, and the measurement error is denoted as ε_uncertainty:

ε_uncertainty = Data_measured − Data_reference. (1)

After obtaining a series of detection errors for the corresponding sensors, we quantify the interquartile range (IQR) of the boxplot and the number of outliers to indicate the detection capability of the sensor.

In this study, our sole focus is on the different sensors' performance in lateral distance detection. This is because autonomous vehicles rely on sensors to detect and respond to their surroundings, and lateral distance detection is essential in this process. Accurate lateral detection enables the vehicle to maintain a safe and stable driving path, which is critical to ensuring the safety of passengers and other road users compared to longitudinal detection-related functions. By continuously monitoring the vehicle's position in relation to its lane and the surrounding vehicles, an autonomous vehicle can make real-time adjustments to its driving path and speed to maintain a safe and stable driving experience. This information is also used by the vehicle's control systems to make decisions about lane changes, merging and navigating curves and intersections. A typical example is found in Baidu Apollo, the world's largest autonomous driving platform, which provides trajectory planning via the EM planner [37]. Therefore, the results of the other sensor outputs are presented in Appendix A, while the following sections focus solely on the performance of the sensors for lateral distance detection.
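A minimal sketch of how the error statistics used in the following sections can be computed from Equation (1) is given below. The IQR and the 1.5×IQR whisker rule for outliers are standard boxplot conventions; the input data here are synthetic, not measurement data from this study.

```python
# Minimal sketch of the error quantification in Equation (1): per-sensor
# detection errors are summarized by the boxplot IQR and the outlier count
# (standard 1.5*IQR whisker rule). Input data are synthetic placeholders.
import numpy as np

def iqr_and_outliers(measured, reference):
    error = np.asarray(measured) - np.asarray(reference)   # Equation (1)
    q1, q3 = np.percentile(error, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    n_outliers = int(np.sum((error < lo) | (error > hi)))
    return iqr, n_outliers

rng = np.random.default_rng(0)
reference = np.zeros(1000)
measured = rng.normal(0.0, 0.05, size=1000)   # synthetic lateral errors [m]
print(iqr_and_outliers(measured, reference))
```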
Camera

Cameras are currently widely utilized in the field of automotive safety. Hasirlioglu et al. [10] have demonstrated through a series of experiments that intense rainfall causes a loss of information between the Camera sensor and the object, which cannot be fully retrieved in real time. Meanwhile, Borkar et al. [14] have proven that the presence of artificial lighting can be a distraction factor which makes lane detection very difficult. In addition, Koschmieder's model describes visibility as inversely proportional to the extinction coefficient of air and has been widely used since the last century [38]. This model can be conveniently defined, following [39], by Equation (2):

I(x, λ) = J(x, λ) e^(−β(λ) d(x)) + A(λ) (1 − e^(−β(λ) d(x))), (2)

where x denotes the horizontal and vertical coordinates of the pixel, λ denotes the wavelength of visible light, β is the extinction coefficient of the atmosphere and d is the scene depth. Furthermore, I and J denote the scene radiance of the observed and clear images depending on x and λ, respectively. The last item, A, indicates the lightness of the observed scene. Therefore, by analyzing this equation, once the illumination and the extinction coefficient influence the image observed by the Camera sensor, the estimation of obstacles can lead to detection errors. These test results can be observed in Figures 5 and 6, which illustrate the Camera's lateral detection results. In general, having the wipers activated during rainfall can help to maintain a clear view for the Camera, making the detection more stable. However, the performance of the Camera's detection is still impacted by other environmental factors, such as the intensity of the rain and the level of ambient light. These factors can affect the image quality captured by the Camera, making it more difficult for the system to detect and track objects accurately. As shown in Table 4, the IQR increases as the rainfall increases. Specifically, the outlier numbers increase by at least 23% compared to the dry road conditions. It is also evident that the Camera is susceptible to illumination. In principle, the reflectivity of the car body material is higher under sufficient lighting; hence, the extinction coefficient decreases. Good contrast with the surrounding environment is conducive to recognition. Meanwhile, artificial lighting provides the Camera with enough light at night to capture clear images and perform accurate detection. In this scenario, there is a significant increase in outliers. For dry road conditions, the detected outliers are 55% higher at night than during the day. In the case of moderate rainfall, the number of outliers increases by 41.7%. The high uncertainty of detection at night leads to a decrease in average accuracy, as seen in Table 5.
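To make the role of the extinction term in Equation (2) concrete, the following minimal Python sketch evaluates the model for a few assumed extinction coefficients. All numerical values are illustrative placeholders, not measured quantities from this study.

```python
# Minimal numerical sketch of Koschmieder's model, Equation (2): the observed
# radiance is the attenuated clear-scene radiance plus airlight. As the product
# beta*d grows, the observed value drifts towards A and contrast is lost.
import numpy as np

def observed_radiance(J, beta, d, A):
    t = np.exp(-beta * d)            # transmission over scene depth d
    return J * t + A * (1.0 - t)

J = 0.8                              # clear-scene radiance (normalized, assumed)
A = 1.0                              # airlight / scene lightness (assumed)
for beta in (0.01, 0.05, 0.2):       # growing extinction, e.g., heavier rain
    print(beta, observed_radiance(J, beta, d=50.0, A=A))
```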
Comparing Figures 5 and 6, the Camera detection error with the smallest range of outliers occurs during the day on a dry road. In addition, nighttime conditions with moderate rainfall are a challenge for Camera detection, where the outlier range is significantly increased in Figure 6b. Since the Camera is a passive sensor, like most computer vision systems it relies on clearly visible features in the Camera's field of view to detect and track objects. For a waterlogged road, the water can cause reflections and glare that affect the image quality captured by the Camera and make it difficult for the system to process the information accurately. As a result, the average error is greatest in this condition, as illustrated in Table 5. Finally, Figure 7 demonstrates the influence of distance on the detection results; it can be clearly seen that the effective detection range of the Camera is about 100 m. The error increases for detection results beyond this range, and the confidence interval also grows, which makes the detection results unreliable.

Radar

In the last decade, Radar-based ADAS has been widely used by almost every car manufacturer in the world. However, in the millimetre wave spectrum, adverse weather conditions, for example rain, snow, fog and hail, can have a significant impact on Radar performance [21]. Moreover, the study of [10] has demonstrated that different rain intensities directly affect the capability of an obstacle to reflect an echo signal in the direction of a Radar receiver, thus impacting the maximum detectable range, target detectability and tracking stability. Therefore, rain effects on the mm-wave Radar can be classified as attenuation and backscatter. The two mathematical models for the attenuation and backscattering effects of rain can be represented by Equations (3) and (4), respectively. The attenuation model is given by Equation (3):

P_r = (P_t G² λ² σ_t V) / ((4π)³ r⁴) · e^(−2γr), (3)

where r is the distance between the Radar sensor and the target obstacle, λ is the Radar wavelength, P_t is the transmission power, G denotes the antenna gain, and σ_t denotes the Radar cross-section of the target. The rain attenuation coefficient γ is determined by the rainfall rate, and the multipath coefficient is V. From Equation (3), it can be seen that we need to consider the rain attenuation effects when calculating the received signal power P_r; this is based on the rain attenuation effects, the path loss and the multipath coefficient.
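As an illustration of Equation (3), the sketch below evaluates the received power for a few assumed rain attenuation coefficients. The antenna and target parameters are placeholder values, not those of the tested Radar.

```python
# Minimal sketch of the attenuation model of Equation (3): received power of a
# radar target with r^4 path loss and a two-way exponential rain-attenuation
# term. All parameter values are illustrative assumptions.
import numpy as np

def received_power(P_t, G, lam, sigma_t, r, gamma, V=1.0):
    path = (P_t * G**2 * lam**2 * sigma_t * V) / ((4 * np.pi)**3 * r**4)
    return path * np.exp(-2.0 * gamma * r)   # two-way rain attenuation

P_t, G, lam, sigma_t = 1.0, 100.0, 3.9e-3, 10.0   # assumed 77 GHz band values
for gamma in (0.0, 1e-3, 5e-3):                   # clear -> heavy rain [1/m]
    print(gamma, received_power(P_t, G, lam, sigma_t, r=100.0, gamma=gamma))
```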
The relationship between the power intensity S_t of the target signal and that of the backscatter signal S_b is characterized by Equation (4); it is essential to maintain this ratio above a certain threshold for reliable detection:

S_t / S_b = σ_t / (σ_i (cτ/2) π (r θ_BW / 2)²), (4)

where τ is the pulse duration, θ_BW denotes the antenna beamwidth and c is the speed of light. However, the rain backscatter coefficient σ_i is highly variable as a function of the drop-size distribution. Therefore, by Equation (4), the Radar will also consume more energy and suffer greater rain backscatter interference. Rainwater can also produce a water film on the Radar's housing and thus affect the detection performance, which can be observed in Table 6. Although the difference in IQR values between rainy and clear weather conditions is insignificant, the number of outliers increases during heavy rainfall. Overall, the Radar is not sensitive to environmental factors. In particular, the illumination level does not affect the Radar's detection performance, and the IQR values remain consistent with those during the day. Figures 5 and 6 demonstrate this phenomenon: the ranges of the IQR and outliers are basically the same, but when the rainfall intensity is relatively high, the rain backscatter interference leads to more outliers in Figure 6b. However, the average error of the Radar's lateral detection results is larger compared to the Camera and LiDAR, as shown in Table 5. This is because the working principle of Radar is to emit and receive radio waves, which are less focused and have a wider beam width compared to the laser used by LiDAR. This results in a lower spatial resolution for Radar, posing a challenge for lateral detection. LiDAR uses the laser to construct a high-resolution 3D map of the surrounding environment. At the same time, the Camera captures high-resolution images that can be processed using advanced algorithms to detect objects in the scene. This makes LiDAR and Camera systems more suitable for lateral detection than Radar systems. Finally, comparing the performance with the Camera and LiDAR in Figure 7, the Radar has the farthest detection distance of approximately 200 m, but the error increases as the distance increases.

LiDAR

In recent years, automotive LiDAR scanners have become sensors essential to the development of autonomous cars. A large number of algorithms have been developed around the 3D point cloud generated by LiDAR for object detection, tracking, environmental mapping or localization. However, LiDAR's performance is more susceptible to the effects of adverse weather. The studies of [19,40] tested the performance of various LiDARs in a well-controlled fog and rain facility. Meanwhile, these studies verified that as the rainfall intensity increases, the number of point clouds received by the LiDAR decreases, which affects the tracking and recognition of objects. This process can be summarized by LiDAR's power model in Equation (5):

P(R) = (E_p c A η β T(R)) / (2 R²). (5)

This equation describes the power of a received laser return at a distance R, where E_p is the total energy of a transmitted laser pulse and c is the speed of light. A represents the receiver's optical aperture area, and η is the overall system efficiency. β denotes the reflectivity of the target's surface, which is decided by the surface properties and the incident angle. The last item, T(R), can be regarded as the transmission loss through the transmission medium, which is given by Equation (6):
T(R) = exp(−2 ∫₀^R α(x) dx), (6)

where α(x) is the extinction coefficient of the transmission medium; extinction arises because particles within the transmission medium scatter and absorb laser light. From a short review of Equations (5) and (6), we can infer that rainfall enlarges the transmission loss T(R) and hence leads to a decrease in the received laser power P(R), which makes the subsequent signal processing steps fail. In fact, the performance of the LiDAR is degraded due to changes in the extinction coefficient α and the target reflectivity β. Most previous studies focused on the statistics of point cloud intensities; the point cloud intensity decreases with rain intensity and distance. However, object recognition based on deep learning is robust and can resist well the impact of environmental noise on the accuracy of the final results. This phenomenon can be observed in our statistics shown in Figures 5 and 6. Although the list of objects output by the LiDAR is less influenced by the environment, there are still performance differences. In Table 7, it can be seen that dry road conditions are indeed the most suitable for LiDAR detection and that the difference in IQR between daytime and nighttime is insignificant. Since the tests under the rain simulator are all close-range detections, the results on wet road surfaces do not differ much from those on dry surfaces. However, once the rain test started, the difference was noticeable. Raindrops can scatter the laser beams, causing them to return false or distorted readings. This can result in reduced visibility, making it more difficult for the system to detect objects and obstacles on the road. Therefore, as the amount of rain increases, LiDAR detection becomes more difficult, especially in heavy rain, where the IQR increased by 0.156 m compared to testing in dry conditions. Additionally, the number of outliers also significantly increases. Furthermore, Figure 5c,d also demonstrate this phenomenon, with a larger varying range of outliers under rainy conditions. Meanwhile, the range of outliers covers the entire observed range of the boxplot in Figure 6b. The influence of rain on LiDAR performance is evident. In Figure 7c, it can be seen that the detection range of the LiDAR can reach 100 m. However, the effective range of the 16-beam LiDAR for stable target tracking is about 30 m. Beyond this range, tracking becomes unstable and is accompanied by missed tracks. This is because the algorithms may use a threshold for the minimum signal strength or confidence level required to recognize an object, which limits the maximum range of the object recognition output. In addition, considering the robustness and accuracy of the algorithm, point clouds at long ranges could be filtered out due to the limited resolution and other sources of error, thereby reducing the computational requirements and potential errors associated with processing data at more distant ranges. Through this method, the 16-beam LiDAR can provide higher resolution and accuracy over a shorter range, which is suitable for many applications, such as automated driving vehicles and robotics. The error is larger only at close range, which is caused by the mounting position of the LiDAR. Since our tested LiDAR is installed in the front end of the vehicle, it is difficult to cover the whole object when the car is close to the target, which makes the recognition more difficult and less accurate. However, this problem gradually improves when the target vehicle is farther away from the ego car.
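The combined effect of Equations (5) and (6) can be illustrated numerically. The sketch below assumes a homogeneous medium (constant α), so the integral in Equation (6) reduces to a simple product; all system parameters are placeholder values, not those of the tested 16-beam LiDAR.

```python
# Minimal sketch of Equations (5) and (6): received LiDAR power with a
# transmission loss computed for a homogeneous medium (constant extinction
# coefficient alpha). All parameter values are illustrative assumptions.
import numpy as np

def transmission(alpha, r):
    # Equation (6) for a homogeneous medium: T(r) = exp(-2 * alpha * r)
    return np.exp(-2.0 * alpha * r)

def lidar_power(E_p, A, eta, beta, r, alpha):
    c = 3.0e8                                  # speed of light [m/s]
    return (E_p * c * A * eta * beta * transmission(alpha, r)) / (2.0 * r**2)

E_p, A, eta, beta = 1e-6, 1e-3, 0.8, 0.1       # assumed pulse/system values
for alpha in (1e-4, 5e-3, 2e-2):               # clear air -> heavy rain [1/m]
    print(alpha, lidar_power(E_p, A, eta, beta, r=30.0, alpha=alpha))
```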
Discussion

In this section, we discuss the observations and limitations of sensors for ADAS during the measurements under the rain simulator. Using Table 5, we compare the average detection error of the sensors for the different environmental conditions. The comparison of lateral distance errors reveals that there is no significant difference between the errors of the Camera and LiDAR sensors, as the average detection errors for the two sensors are only 0.054 m and 0.042 m, respectively. Meanwhile, the conclusion drawn in the study of [40] is consistent with our findings, as changes in the propagation medium of the laser due to rain and fog adversely affect the detection. However, the Radar's lateral detection is not as reliable as its longitudinal detection, as indicated by the average error of 0.479 m. This is due to the small amount of point cloud data from the Radar, which makes it challenging to discern lateral deviations after clustering, as discussed in [10,20]. Furthermore, the error results from the different environments indicate that Radar is the least affected by environmental factors. Although Cameras are also less impacted by rainfall, it should be noted that the tests were performed with the wipers on. Additionally, in the night tests, our results demonstrated that the detection performance is enhanced by the contrast improvement at night with sufficient artificial light. Finally, while LiDAR has the highest detection accuracy, it is susceptible to the amount of rain, and the accuracy difference is more than a factor of four.

To investigate the impact of distance on detection accuracy, we aggregated all test cases in Figure 7 statistically. LiDAR showed an extremely high accuracy: the mean error is observed to be merely 0.041 m, and the standard deviation is also effectively controlled. However, the effective detection range of the LiDAR is only about 30 m; beyond this range, target tracking is occasionally lost. Compared with the Radar's effective detection range of up to 200 m, it is obvious that there are limitations in its usage scenarios. Moreover, the detection errors of both the Radar and the Camera become larger as the distance increases. The Camera's average lateral error is 0.617 m, whereas the Radar exhibits a surprisingly high lateral error of 1.456 m, indicating a potential deviation of one lane as the distance increases. This presents a significant risk to the accuracy of estimated target vehicle trajectories. Finally, by using uniform sampling, we calculated the detection error for each sensor under all conditions, as summarized in Table 8.
Conclusions

Through a series of experiments, we have shown the impact of unfavourable weather conditions on automotive sensors' detection performance. Our analysis focused on lateral distance detection, and we quantitatively evaluated the experimental results. Our studies demonstrated that rainfall can significantly reduce the performance of automotive sensors, especially for LiDAR and Camera. Based on the results presented in Table 5, it can be inferred that LiDAR's detection accuracy diminishes by a factor of 4.8 as the rainfall intensity increases, yet it still exhibits a relatively high precision. In contrast, the Camera's performance experiences less variation in rainy weather, with a maximum reduction of 1.57 times. However, as the Camera is significantly affected by lighting conditions, its detection accuracy declines by 4.6 times in rainy nighttime conditions compared to clear weather conditions. Additionally, the detection error fluctuation of the Radar was slight, but it lacked lateral estimation accuracy. Under the same weather conditions, the Radar exhibits detection accuracy that is on average 16.5 and 14 times less precise than the Camera and LiDAR, respectively.

Furthermore, we conducted a series of nighttime tests that illustrated the positive effect of strong artificial illumination on Camera detection. These experimental findings provide essential insights for automotive manufacturers to design and test their sensors under various weather and lighting conditions to ensure accurate and reliable detection. Additionally, drivers should be aware of the limitations of their vehicle's sensors and adjust their driving behaviour accordingly during adverse weather conditions. Overall, the detection performance of different automotive sensors under environmental conditions provides valuable data to support sensor fusion. For instance, while LiDAR has a maximum effective detection range of around 100 m, tracking loss occurs beyond 30 m. Thus, to address the limitations of individual sensors, multi-sensor fusion is a promising approach.

As part of our future work, we aim to conduct a more in-depth analysis of the raw data obtained from automotive sensors and introduce more rain and illumination conditions, for example, more tests with varying rain and artificial light intensity. Raw data are critical inputs to the perception algorithm and often have a significant impact on the final detection output. We particularly want to investigate the effects of rainfall on LiDAR's point cloud data, as they can significantly impact detection accuracy. Additionally, we plan to explore the development of a sensor fusion algorithm based on the experimental results. By combining data from multiple sensors, sensor fusion can compensate for the limitations of individual sensors, providing a more comprehensive perception of the environment and enabling safer and more effective decision-making for autonomous driving systems. Therefore, our future work will focus on improving the accuracy and reliability of sensor data to enable more robust sensor fusion algorithms.

Figure 1. Overview of DigiTrans test track in St. Valentin, Lower Austria, with outdoor rain plant © DigiTrans GmbH.

Figure 2. Cross-section of outdoor rain plant in a longitudinal direction with rain characteristics © DigiTrans GmbH.

Figure 3. Overview of sensors and measurement equipment integrated on the testing car.
Figure 5. Effect of environmental conditions on sensors at daytime in the rain simulator (near range). (a) Lateral distance detection error under dry road conditions; (b) lateral distance detection error under wet road conditions; (c) lateral distance detection error under moderate-intensity rain conditions; (d) lateral distance detection error under heavy-intensity rain conditions.

Figure 7. Camera, Radar and LiDAR detection performance over distance. (a) Camera detection performance; (b) Radar detection performance; (c) LiDAR detection performance.

Figure A2. Effect of environmental conditions on sensors at nighttime in the rain simulator (near range) with artificial lighting. (a) Longitudinal distance detection error under dry road conditions; (b) longitudinal distance detection error under moderate-intensity rain conditions.

Table 1. The tested sensors with their respective information.

Table 2. Accuracy of RS-Reference compared with GENESYS ADMA.

Table 3. Test case descriptions.

Table 4. Quantification of the statistics of Camera detection performance in Figures 5 and 6.

Table 5. Average lateral detection accuracy comparison of sensors in the rain simulator (near range) in meters.

Table 6. Quantification of the statistics of Radar detection performance in Figures 5 and 6.

Table 7. Quantification of the statistics of LiDAR detection performance in Figures 5 and 6.

Table 8. Average lateral detection accuracy comparison of sensors over the full range.
2023-04-29T15:02:58.269Z
2023-04-27T00:00:00.000
{ "year": 2023, "sha1": "f0433e424089cf5bfae3ff1ab08725e6a8eee2e5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/15/9/7260/pdf?version=1683207142", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "3cf914b522c68aa316d9cee205175200071595db", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [] }
906202
pes2o/s2orc
v3-fos-license
Titanium-niobium Oxides as Non-noble Metal Cathodes for Polymer Electrolyte Fuel Cells

In order to develop noble-metal- and carbon-free cathodes, titanium-niobium oxides were prepared as active materials for oxide-based cathodes, and the factors affecting the oxygen reduction reaction (ORR) activity were evaluated. The high-concentration sol-gel method was employed to prepare the precursor. Heat treatment in Ar containing 4% H2 at 700-900 °C was effective for conferring ORR activity on the oxide. Notably, the onset potential for the ORR of the catalyst prepared at 700 °C was approximately 1.0 V vs. RHE, indicating high-quality active sites for the ORR. X-ray (diffraction and photoelectron spectroscopic) analyses and ionization potential measurements suggested that localized electronic energy levels were produced via heat treatment under a reductive atmosphere. Adsorption of oxygen molecules on the oxide may be governed by the localized electronic energy levels produced by the valence changes induced by substitutional metal ions and/or oxygen vacancies.

Introduction

Polymer electrolyte fuel cells (PEFCs) offer many advantages, including high power density, high energy conversion efficiency, and low operating temperatures. PEFCs are therefore suitable as power sources for vehicles and residential co-generation power systems. However, the use of Pt as a cathode electrocatalyst for PEFCs is problematic due to the high cost and limited availability of Pt and the insufficient stability of these catalysts. To successfully commercialize PEFCs, low-cost non-platinum cathode catalysts with high stability must be developed.

Since Jasinski discovered the oxygen reduction reaction (ORR) activity of cobalt phthalocyanine [1], the search for promising non-platinum ORR catalysts has led to the development of several cobalt- and iron-containing catalysts [2,3]. Approaches to enhance the activity of these catalysts include the use and optimization of carbon supports and heat treatment conditions. Heat treatment of iron salts adsorbed on carbon supports under ammonia gas is a recent breakthrough that produces catalysts with ORR activities comparable to those of platinum-based catalysts [4]. Despite significant improvement of the ORR activity of non-platinum catalysts, issues regarding their long-term durability remain unresolved.

Based on the high stability of Group 4 and 5 metal oxide-based compounds in acidic media, their low cost [5,6], and their lower solubility in acid solution compared to platinum-based catalysts, these compounds have piqued our interest, as they are expected to be stable even under the conditions encountered at the PEFC cathode. Recently, we successfully synthesized oxide-based nanoparticles using oxy-metal phthalocyanines (MeOPc; Me = Ta, Zr, and Ti) as the starting material and multi-walled carbon nanotubes (MWCNTs) as the support as well as the electro-conductive material [7,8]. However, carbon materials are easily oxidized at high potentials, with a consequent decrease of the ORR activity due to degradation of the electron conduction paths [8]. Thus, carbon-free electrocatalysts are required to achieve high durability of the oxide-based cathodes. To prepare noble-metal- and carbon-free cathodes, the basic approach is to combine electro-conductive oxides with oxides that possess ORR active sites.
Previously, we prepared noble-metal- and carbon-free cathodes comprising niobium-titanium oxides with active sites and titanium oxides with the Magnéli phase Ti4O7 as the electro-conductive material (i.e., TixNbyOz + Ti4O7) [9]. The highest onset potential of TixNbyOz + Ti4O7 was ca. 1.1 V versus the reversible hydrogen electrode (RHE). No degradation of the ORR performance of TixNbyOz + Ti4O7 was observed during start-stop and load cycle tests in 0.1 mol·dm−3 H2SO4 at 80 °C, conditions close to the operating conditions of existing PEFCs [10]. Therefore, we successfully demonstrated the superior durability of noble-metal- and carbon-free oxide-based cathodes under the cathode conditions of the PEFC.

However, the ORR activities of the TixNbyOz + Ti4O7 catalysts were still low because these catalysts were prepared under argon containing 4% hydrogen at a high temperature, 1050 °C, at which Ti4O7 is generated by the reduction of TiO2. That is, the preparation conditions encouraged the formation of Ti4O7 but were not optimal for the formation of niobium-titanium oxides with active sites. Domen et al. demonstrated that Nb-doped TiO2 synthesized by the oxidation of Nb-doped TiN nanoparticles exhibited definite ORR activity and high long-term stability in acidic solutions [11]. However, those catalysts contained carbon residues that functioned to improve the conductivity between the particle aggregates. The preparation conditions used in that study were thus not suitable for the formation of ORR-active titanium-niobium oxides without carbon. Consequently, it is necessary to separately optimize the conditions for the formation of titanium-niobium oxides with active sites and the formation of electro-conductive oxides. In this study, we focus on the formation of active sites on titanium-niobium oxides using a high concentration sol-gel method. The factors that influence the ORR activity in the absence of a carbon support are evaluated. However, sufficient electro-conductivity is needed to evaluate the ORR activity of the titanium-niobium oxides. Even when a glassy carbon (GC) rod is heat-treated in air, an insulating oxide film does not form on its surface; the GC rod is therefore well suited as a substrate for the working electrode. The present strategy utilizes pre-heat-treatment (600 °C in air for 10 min) to achieve sufficient electrical contact between the titanium-niobium oxides and the GC substrate. Sufficient electro-conductivity between the oxide-based catalysts and a conductive oxide support must also be secured when carbon-free cathodes are prepared. For example, the electro-conductive oxide network is prepared in advance; then, after the oxide-based precursor is supported on the network, it is heat-treated to create the ORR active sites and to obtain sufficient electro-conductivity. In this study, the effects of the preparation conditions, such as the gas atmosphere and heat treatment temperature, on the ORR activity of titanium-niobium oxides on a GC rod are evaluated.
Characterization of Catalysts

We prepared the titanium-niobium oxide samples with the charged total composition of Ti0.841Nb0.126O2. Figure 1 shows the X-ray diffraction (XRD) patterns of the titanium-niobium oxide samples prepared at 600, 700, 800, 900, and 1050 °C (a) in air and (b) in Ar containing 4% H2. The crystalline phase of the catalysts prepared by heat treatment in air at temperatures between 600 and 900 °C was identified as anatase TiO2 (JCPDS no. 00-021-1272), indicating that the niobium atoms were incorporated into the anatase TiO2 structure. According to the phase diagram of TiO2-Nb2O5 [12], Nb(V) ions dissolve into the rutile TiO2 structure only up to ca. 10 atomic % in this temperature range. On the other hand, the quasi-stable anatase TiO2 phase can dissolve more Nb(V) ions. The phase transition from anatase to rutile occurred at temperatures above 900 °C. For samples prepared at higher temperatures, peaks corresponding to the rutile TiO2 (JCPDS no. 00-021-1276) and TiNb2O7 (JCPDS no. 1001270) phases were observed. This is because the Nb(V) ions that cannot dissolve in the rutile TiO2 structure form the complex oxide TiNb2O7, a solid solution of TiO2 and Nb2O5. Accordingly, Nb-containing phases such as TiNb2O7 appeared at 1050 °C. These results are consistent with previous observations [13]. The crystalline phase of the samples subjected to heat treatment at 600 and 700 °C in Ar containing 4% H2 could be indexed to the anatase TiO2 structure. However, the samples prepared at temperatures above 800 °C under this reductive atmosphere could be indexed to rutile TiO2 with no Nb2O5 peaks. The shift of the XRD peaks to lower angles (Figure S1) with increasing treatment temperature suggested that the catalysts are substitutional solid solutions in which niobium ions substitute for titanium ions in the rutile TiO2 lattice. Whereas the rutile phase formed above 900 °C for the samples heat-treated in air, it formed at 800 °C under the reductive atmosphere. Thus, the transformation from the anatase to the rutile phase occurred at a lower temperature under the reductive atmosphere. In addition, the substitutional solid solution (rutile phase) was stable up to 1050 °C under the reductive atmosphere. The XRD analysis clearly demonstrated that the rutile-based TiO2 structure was more stable under the reductive atmosphere than in air. This stabilization of the rutile-based TiO2 structure is not predicted from the viewpoint of thermochemistry; the roles of heat treatment under a reductive atmosphere and of the doped niobium ions must be elucidated.

Figures 2 and S2 show scanning electron microscopy (SEM) images of the titanium-niobium oxides prepared at 600 °C in air, and at 600, 700, 800, 900, and 1050 °C in Ar containing 4% H2. The SEM images demonstrate that the surface morphology of the titanium-niobium oxides depends on the heat treatment temperature. Very little difference in the surface morphology was observed for the samples prepared by heat treatment at 600 °C under different atmospheres. The particle size of the catalysts prepared at 600 °C was ca. several tens of nanometers. A significant change in the morphologies of the catalysts was observed with treatment at 800 °C, indicative of particle aggregation above 800 °C. Aggregation became more pronounced with increasing heat treatment temperature. Thus, the surface area of the catalysts decreased with temperature, especially above 800 °C.
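The peak-shift argument can be made concrete with Bragg's law: a reflection moving to lower 2θ corresponds to a larger d-spacing, as expected when the larger Nb(V) ion substitutes for Ti(IV). A minimal sketch follows; the 2θ values are hypothetical placeholders (the measured positions are shown only in Figure S1), and the code is purely illustrative.

import math

WAVELENGTH = 1.5406  # Cu-K-alpha wavelength in angstroms (standard value)

def d_spacing(two_theta_deg):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH / (2.0 * math.sin(theta))

# Hypothetical rutile (110) peak positions for two treatment temperatures
peaks = {"800 C": 27.45, "1050 C": 27.38}  # degrees 2-theta (assumed values)

for label, two_theta in peaks.items():
    print(f"{label}: 2theta = {two_theta:.2f} deg, d(110) = {d_spacing(two_theta):.4f} A")
# The lower-angle peak yields the larger d-spacing, i.e., an expanded lattice.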
Figure 3 shows photographs of the catalysts prepared at 600 °C in air, and at 600, 700, 800, 900, and 1050 °C in Ar containing 4% H2. The powder heat-treated at 600 °C was white, as expected from the wide bandgap of TiO2 (all samples treated in air were white). On the other hand, the samples heat-treated at 600 °C under the reductive atmosphere had a light-blue color, and the color deepened with increasing temperature. This color change suggests that there is some difference in the electronic energy levels of the samples prepared under the reductive atmosphere relative to those prepared in air. Namely, the difference between the highest occupied and lowest unoccupied electronic energy levels decreases with increasing temperature. This color change suggests the development of localized electronic energy levels in the bandgap of TiO2.

Figure 4a shows the Ti 2p XPS spectra of the catalysts prepared at 800 °C in air and in Ar containing 4% H2. As anticipated, the Ti 2p XPS spectra revealed that Ti adopted the tetravalent state for the specimen heat-treated in air, based on the 2p3/2 peak (TiO2; 458.8 eV [14]). On the other hand, a low valence state, i.e., Ti3+ (Ti2O3; 456.8 eV [15]), was observed for the catalyst heat-treated at 800 °C under the reductive atmosphere. The ratios of Ti3+/Ti4+ calculated from the areas of the XPS spectra of the specimens heat-treated at 800 °C in air and in Ar containing 4% H2 were 5.0% and 10%, respectively; the ratio of the specimen prepared under the reductive atmosphere was twice as large as that prepared in air. In addition, the total atomic ratio of Nb/Ti is 0.15 according to the charged total composition of Ti0.841Nb0.126O2. The atomic ratios of Nb/Ti calculated from the areas of the XPS spectra of the specimens heat-treated at 800 °C in air and in Ar containing 4% H2 were 0.43 and 0.23, respectively. Both ratios are larger than the total atomic ratio, suggesting that the niobium ions accumulate at the surface of the oxide particles. In particular, the Nb/Ti ratio of the specimen heat-treated in air was about three times larger than the total atomic ratio. As noted in the discussion of the XRD patterns, because the rutile TiO2 phase cannot dissolve the Nb(V) ions, the Nb(V) ions dissolved in the anatase TiO2 phase began to accumulate near the surface of the particles upon heat treatment at higher temperature.

Figure 4b shows the Ti 2p XPS spectra of the catalysts prepared at 600, 700, 800, 900, and 1050 °C in Ar containing 4% H2. Low-valence Ti was only weakly observed for the catalyst heat-treated at 600 °C under the reductive atmosphere, suggesting that the oxides underwent little reduction during the 10 min treatment at 600 °C in Ar containing 4% H2. Heat treatment above 700 °C under the reductive atmosphere resulted in the formation of low-valence Ti, as shown in Figure 4.
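As an illustration of how such ratios are typically extracted, the sketch below converts fitted XPS peak areas into atomic ratios using relative sensitivity factors (RSFs). The areas and RSFs are placeholder values chosen only to reproduce the quoted numbers, not the paper's data; in practice the RSFs come from the instrument library.

def atomic_ratio(area_a, rsf_a, area_b, rsf_b):
    """Atomic ratio n_a/n_b = (A_a / RSF_a) / (A_b / RSF_b)."""
    return (area_a / rsf_a) / (area_b / rsf_b)

# Hypothetical fitted areas for the 800 C reductive sample (arbitrary units)
area_ti3, area_ti4 = 1.0, 10.0    # Ti 2p3/2 components near 456.8 / 458.8 eV
area_nb3d, area_ti2p = 4.0, 11.0  # total Nb 3d and Ti 2p areas

rsf_ti2p, rsf_nb3d = 1.8, 2.9     # placeholder RSFs (instrument-dependent)

# Components of the same element share one RSF, so it cancels in the ratio:
print(f"Ti3+/Ti4+ = {atomic_ratio(area_ti3, 1.0, area_ti4, 1.0):.1%}")
print(f"Nb/Ti     = {atomic_ratio(area_nb3d, rsf_nb3d, area_ti2p, rsf_ti2p):.2f}")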
Figure 5a shows the temperature dependence of the Ti3+/Ti4+ ratios (expressed as STi(III)/STi(IV)) calculated from the areas of the XPS spectra of the specimens heat-treated under the reductive atmosphere. The Ti3+/Ti4+ ratio of the specimen prepared at 600 °C is 6.7%. Ti3+ ions are produced when Nb5+ ions substitute for Ti4+ ions in the TiO2 lattice. Figure 5b shows the Nb 3d XPS spectra of the catalysts prepared at 600, 700, 800, 900, and 1050 °C in Ar containing 4% H2. The peak in the Nb 3d spectra shifted to higher binding energy (NbO2: 205.3 eV [16]; Nb2O5: 207.1 eV [17]) with increasing heat treatment temperature, in contrast with the Ti 2p peak. The Nb 3d XPS spectra therefore revealed that most of the Nb ions were in the highest oxidation state, 5+. Thus, the state of the specimens can be expressed as Ti(IV)1−2xTi(III)xNb(V)xO2. If all Nb ions substituted for Ti4+ ions of the TiO2 lattice as Nb(V) ions, the composition would be Ti(IV)0.74Ti(III)0.13Nb(V)0.13O2; in that case, the Ti3+/Ti4+ ratio is calculated to be ca. 18%. The Ti3+/Ti4+ ratio at 600 °C, ca. 6.7%, was smaller than 18%, indicating that the Nb(V) ions were not sufficiently incorporated into the TiO2 lattice at 600 °C. As shown in Figure 5a, the Ti3+/Ti4+ ratio increased with increasing temperature from 600 °C to 700 °C and saturated around 10%. These results indicate that reductive heat treatment above 700 °C induced the formation of low-valence Ti.

Figure 6 shows the dependence of the atomic ratio of Nb/Ti calculated from the XPS spectra of the specimens prepared under the reductive atmosphere on the heat treatment temperature. The atomic ratio of Nb/Ti decreased with increasing temperature above 700 °C and approached the bulk value at 1050 °C. The XRD patterns revealed that the bulk phase transition occurred between 700 and 800 °C under the reductive atmosphere. The XPS spectra indicated that the titanium ions near the surface were reduced and the Nb(V) ions near the surface were incorporated into the TiO2 lattice at ca.
700 °C. Therefore, the phase transition was probably caused by a change in the valence of titanium. We previously demonstrated that tantalum and zirconium oxide-based catalysts had some oxygen vacancies that acted as active sites for the ORR [6]. In the case of the titanium-niobium oxide system, the low valence state of the metal ions does not always indicate the presence of oxygen vacancies: because the highest valence states of titanium and niobium differ, the low valence state of the metal ions can be achieved even in the absence of oxygen vacancies. The relationship between the presence of oxygen vacancies and the active sites remains a topic for further study. It was difficult to evaluate the differences in the electronic state of the catalysts heat-treated under the reductive atmosphere at temperatures between 700 and 1050 °C based on the XPS spectra, as shown in Figures 4b and 5a. Thus, the ionization potential of the specimens was used as a parameter to evaluate these differences. The ionization potentials of the specimens were measured using a photoelectron spectrometer surface analyzer in order to investigate the differences in the surfaces of the specimens heat-treated in the reductive atmosphere at different temperatures. Figure 7a shows the relationship between the square root of the photoelectric quantum yield and the photon energy (that is, the photoelectron spectra of the specimens heat-treated at 800 °C in air or in Ar containing 4% H2). The square root of the photoelectric quantum yield increased linearly with an increase in the photon energy applied to each specimen. The slope of the straight line reflects the tendency of the photoelectron emission of the specimens, that is, the density of states of the electrons near the Fermi level. Fewer photoelectrons were emitted in the case of the catalyst prepared in air. The slope of the straight line for the specimen heat-treated in air, for which TiO2 was identified on the sample surface by XPS, was apparently lower than that of its congener prepared under the reductive atmosphere. It is remarkable that the slope of this plot was steeper for the specimen prepared in Ar containing 4% H2. The intersection between the straight line and the background line in the photoelectron spectra provides the threshold energy corresponding to the photoelectric ionization potential, which corresponds to the highest energy level of the electrons in the material. The ionization potential is directly affected by the localized electronic levels of the lattice defects and impurities in the metal oxides, such as valence changes due to substitutional metal ions, oxygen vacancies, and donor impurities. Figure 7b shows the dependence of the ionization potential of the catalysts prepared at 600, 800, and 1050 °C in air, and at 600, 700, 800, 900, and 1050 °C in Ar containing 4% H2, on the heat treatment temperature, θ. The ionization potential of commercial rutile and anatase TiO2 is 5.8 eV. The ionization potential was the same (i.e., ca. 5.8 eV) for the catalysts prepared at 600, 800, and 1050 °C in air, suggesting that the surface of the catalysts prepared in air had few localized electronic levels from lattice defects and impurities in the metal oxides, similar to commercial TiO2. On the other hand, the ionization potentials of the catalysts prepared under the reductive atmosphere decreased with increasing temperature. The decrease in the ionization potential reflects an increase in the localized electronic levels. In other words, the valence changes due to substitutional
metal ions, oxygen vacancies, and donor impurities increase with increasing temperature.

Figure 7. (a) Relationship between the square root of the photoelectric quantum yield (Y^1/2) and the photon energy of the specimens heat-treated at 800 °C in air or in Ar containing 4% H2. (b) Dependence of the ionization potential of the catalysts prepared at 600, 800, and 1050 °C in air, and at 600, 700, 800, 900, and 1050 °C in Ar containing 4% H2, on the heat treatment temperature, θ.

Oxygen Reduction Activity in Acidic Media

Figure 8a shows the potential-iORR curves for the catalysts prepared at 600, 700, and 1050 °C in Ar containing 4% H2. The heat treatment temperature apparently affected the ORR activity. We focused on the ORR activity in the higher potential region. Figure 8b shows the potential-iORR curves for the catalysts prepared at 600, 700, 800, 900, and 1050 °C in Ar containing 4% H2. All samples prepared in air had a low ORR current in the potential range above 0.6 V, indicating that these catalysts have low ORR activity. On the other hand, although the ORR current was low, the catalysts prepared under the reductive atmosphere exhibited some ORR activity. In particular, the onset potential of the ORR for the catalyst prepared at 700 °C was approximately 1.0 V vs. RHE. This high onset potential indicates the good suitability of the active sites for the ORR. Therefore, high-quality active sites were created by heat treatment at 700 °C under the reductive atmosphere. Figure 9 shows the dependence of the iORR @ 0.7 V on the heat treatment temperature for the samples prepared under the reductive atmosphere. The iORR @ 0.7 V reached a maximum around 700 °C. The iORR presented in Figure 9 is based on the mass of the catalysts loaded on the GC rod. As shown in Figures 2 and S2, the surface area of the catalysts declined precipitously above 800 °C; the decrease in the iORR @ 0.7 V above 800 °C therefore seems to be due to the decrease in the surface area. To evaluate the specific activity (i.e., the ORR current density based on surface area), the actual surface area of the oxides must be estimated. However, it is difficult to estimate the surface area of the oxides because neither hydrogen nor CO is adsorbed by the oxides. Therefore, the electrical charges of the double layer of the catalysts, calculated from the cyclic voltammogram (CV) in N2 atmosphere, were used to evaluate the apparent specific activity of the catalysts. Figure S3 shows the cyclic voltammograms of the GC rod only and of titanium-niobium oxide supported on the GC rod (TixNbyOz/GC) heat-treated at 800 °C under the reductive atmosphere. Because the amount of oxide catalyst loaded on the rod was small (ca.
1 mg), the charge/discharge current was mainly derived from the GC substrate. The electrical charge due to the oxide was estimated from the difference between the CV of TixNbyOz/GC and that of GC only. Figure S4 shows the dependence of the electrical charge of the double layer of the oxides on the catalyst loading. The SEM images showed that the surface area of the catalysts decreased above 800 °C due to aggregation of the particles. However, a linear relationship was obtained, suggesting that the electrical charge was determined not by the heat treatment temperature but by the catalyst loading. Therefore, the trend in the apparent specific activity (ORR current density based on electrical charge) is similar to that of the mass activity. It is anomalous that the electrical charge is independent of the heat treatment temperature; the surface area estimated using the electrical charge may differ from that inferred from the SEM images. Because the electrical conductivity of even the catalysts prepared under the reductive atmosphere is low, the surface area of the electrochemically active region in contact with the GC rod might be small. Thus, a more accurate estimation of the actual surface area of the catalysts is necessary.

Figure 9. Dependence of iORR @ 0.7 V on the temperature used for heat treatment of the samples under the reductive atmosphere.

Relationship between ORR Activity and Physico-Chemical Properties

The ORR activity was enhanced by reductive heat treatment in the region of 700 to 900 °C. The XRD patterns indicated that the crystalline structure of the catalysts prepared under the reductive atmosphere changed from anatase to rutile TiO2 around 800 °C. On the other hand, the XPS spectra revealed that low-valence Ti is generated by heat treatment above 700 °C under the reductive atmosphere. Therefore, reduction of the sample surface occurs around 700 °C. The ionization potential is more sensitive to the surface state, as shown in Figure 7b. Henrich et al. found that the work function (i.e., the ionization potential in this study) of TiO2 decreased as the density of oxygen vacancies increased [18]. Therefore, the low ionization potential suggested that the catalysts heat-treated under the reductive atmosphere at higher temperature had more surface defects. In this study, the oxygen vacancies as well as the valence changes induced by substitutional metal ions were found to produce localized electronic energy levels in the bandgap.

Figure 10 shows the relationship between the ionization potential and the iORR @ 0.7 V of the catalysts prepared under the reductive atmosphere. A "volcano plot" with a maximum at 5.4 eV was obtained, suggesting that there is an electronic state of the sample surface that is optimal for the ORR.
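A minimal sketch of the two normalizations used in this section, assuming the scans are available as NumPy arrays: the mass activity is the O2-N2 current difference (as detailed later in the Experimental Section), and the double-layer charge is the integrated CV difference between the loaded and bare GC rod divided by the scan rate. The toy data are illustrative only, not the measured voltammograms.

import numpy as np

SCAN_RATE = 0.005  # V/s (5 mV/s, as in the experimental section)

def orr_current(i_o2, i_n2):
    """Mass-activity i_ORR: difference of currents under O2 and N2."""
    return i_o2 - i_n2

def double_layer_charge(e, i_loaded, i_bare):
    """Charge of the oxide double layer: trapezoidal integral of the
    CV current difference over potential, divided by the scan rate."""
    di = np.abs(i_loaded - i_bare)
    integral = 0.5 * np.sum((di[1:] + di[:-1]) * np.diff(e))
    return integral / SCAN_RATE

# Toy scan: 0.2-1.2 V with a constant 1 uA offset attributed to the oxide
e = np.linspace(0.2, 1.2, 101)
i_loaded = np.full_like(e, 5e-6)  # TixNbyOz/GC current (A), made up
i_bare = np.full_like(e, 4e-6)    # bare GC current (A), made up

q_dl = double_layer_charge(e, i_loaded, i_bare)
print(f"Q_dl = {q_dl:.2e} C")
# The apparent specific activity would then be i_ORR(0.7 V) / Q_dl.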
Adsorption of oxygen molecules on the surface is required as the first step for the ORR to proceed. Many studies have demonstrated that surface defect sites are required for adsorption of oxygen molecules on the surface of oxides [19]. Therefore, a larger number of surface defects furnishes more sites for the adsorption of oxygen molecules. In addition, the strength of the interaction of oxygen with the catalyst surface is essential because adsorption of oxygen and desorption of water from the surface are both necessary for robust progress of the ORR. When the interaction of oxygen with the catalyst surface is strong, desorption of water does not proceed readily. On the other hand, when the interaction of oxygen with the catalyst surface is weak, less adsorption of oxygen molecules occurs. Therefore, there is an optimal strength for the interaction between oxygen and the catalyst surface. Metallic Ti adsorbs oxygen strongly because of the large adsorption energy of oxygen (759 kJ·mol−1) and the strong Ti-oxygen bond (calculated: 625 kJ·mol−1) [20]. In the case of Pt, the energy for adsorption of oxygen and the calculated Pt-oxygen bond energy are 272 kJ·mol−1 and 385 kJ·mol−1, respectively [20]; the corresponding values for Ti are thus much larger than those for Pt. As the degree of oxidation of metallic Ti increases, the interaction of oxygen with Ti on the catalyst surface is weakened because the oxide ions attract the electrons in the highest occupied molecular orbital of Ti, thereby conferring a positive charge on Ti, i.e., a higher valence state. Because the ionization potential is related to the strength of the interaction between the surface of the specimen and oxygen, the volcano plot shown in Figure 10 suggests that there is an optimal interaction between the surface of the specimen and oxygen. Consequently, the strength of the interaction between oxygen and the oxide surface could be manipulated by controlling the local energy level of the electrons, i.e., by controlling the valence changes induced by the substitutional ions and/or oxygen vacancies.

Figure 10. Relationship between the ionization potential and the iORR @ 0.7 V of the catalysts prepared under the reductive atmosphere.

Experimental Section

The high concentration sol-gel method [21,22] was used for preparation of the precursor. A 30 cm3 aliquot of titanium(IV) tetraisopropoxide (C12H28O4Ti, 99.99%, Sigma-Aldrich Japan Co. LLC, Tokyo, Japan) and 4 cm3 of niobium(V) ethoxide (C10H25NbO5, 99.95%, Aldrich) were dissolved in 200 cm3 of 2-methoxyethanol, with a TiO2:Nb2O5 weight ratio of 8:2. The mixed solution was maintained at −50 °C, and 15 cm3 of 2-methoxyethanol in 15 cm3 of pure water was added to the mixed solution dropwise. The temperature of the solution was raised to 80 °C and maintained for 3 weeks as an aging treatment, resulting in the formation of nano-sized complex oxides. The precipitates were dispersed in 2-methoxyethanol to obtain a dispersion of nano-sized titanium-niobium oxide.
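As a quick consistency check (not part of the paper's procedure), the 8:2 TiO2:Nb2O5 weight ratio can be converted to an atomic ratio; it reproduces the total Nb/Ti of about 0.15 quoted earlier.

M_TIO2 = 79.87    # molar mass of TiO2 in g/mol
M_NB2O5 = 265.81  # molar mass of Nb2O5 in g/mol

mol_ti = 8.0 / M_TIO2          # one Ti per TiO2 formula unit
mol_nb = 2.0 / M_NB2O5 * 2.0   # two Nb per Nb2O5 formula unit

print(f"Nb/Ti = {mol_nb / mol_ti:.3f}")  # prints ~0.150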
A 3-mm3 aliquot of the dispersion was dropped onto a GC rod (φ = 5.0 mm; TOKAI CARBON CO., LTD., Tokyo, Japan), followed by drying at room temperature. The coated rod was heat-treated at 600 °C for 10 min in air as a pre-heat-treatment step to remove organic species and carbon residue and to provide sufficient electrical contact between the titanium-niobium oxide and the GC substrate. Subsequently, samples of titanium-niobium oxide supported on the GC rods were heat-treated at 600, 700, 800, 900, and 1050 °C in air or in Ar containing 4% H2 to prepare the working electrodes. For the powder XRD and ionization potential measurements, 3 cm3 of the dispersion of nano-sized titanium-niobium oxide was dried on a hot plate at 160 °C to obtain the powder samples. The powders were then heat-treated at 600 °C for 10 min in air to remove organic species and carbon residue, and subsequently heat-treated at 600, 700, 800, 900, or 1050 °C in air or in Ar containing 4% H2 for the powder XRD and ionization potential measurements. The morphologies, crystalline structures, and chemical states of the synthesized catalysts were investigated by transmission electron microscopy (TEM; JEOL Ltd. JEM-2100F, Akishima, Japan), X-ray diffraction (XRD; Rigaku Corporation Ultima IV, X-ray source: Cu-Kα, Akishima, Japan), and X-ray photoelectron spectroscopy (XPS; ULVAC-PHI, Inc. Quantum-2000, X-ray source: monochromated Al-Kα radiation, Chigasaki, Japan). The C-C peak attributed to free carbon at 284.6 eV in the C 1s spectrum was used to compensate for surface charging.

All electrochemical measurements were performed in 0.1 mol·dm−3 H2SO4 at 30 °C with a 3-electrode cell. A reversible hydrogen electrode (RHE) and a glassy carbon plate were used as the reference and counter electrodes, respectively. As a pre-treatment, 300 CV cycles were performed in O2 atmosphere in the range of 0.05 to 1.2 V with respect to the RHE at a scan rate of 150 mV·s−1. Slow scan voltammetry was performed under O2 and N2 atmosphere in the range of 0.2 to 1.2 V with respect to the RHE at a scan rate of 5 mV·s−1. The ORR current density, iORR, based on the mass of the catalyst (mass activity), was determined by calculating the difference between the current densities under O2 and N2 atmosphere.

Conclusions

In order to develop noble-metal- and carbon-free cathodes, titanium-niobium oxides were prepared for use as oxide-based cathodes, and the factors affecting the ORR activity and active sites were evaluated. The high concentration sol-gel method was employed for preparation of the precursor. Secure adhesion between the oxide catalysts and the substrate was achieved by heating the precursor-supported GC rod at 600 °C in air as a pretreatment step to maintain the electrical contact. To create ORR active sites, the precursor-supported GC rod was heat-treated in the temperature range of 600 to 1050 °C in air or in Ar containing 4% H2. Heat treatment in the reductive atmosphere at 700-900 °C was effective for conferring ORR activity on the catalysts. Notably, the onset potential for the ORR was approximately 1.0 V vs.
RHE for the catalyst prepared at 700 °C. This high onset potential indicates the high quality of the active sites for the ORR. XRD, XPS, and ionization potential measurements suggested that localized electronic energy levels were produced by heat treatment under a reductive atmosphere. The electronic energy levels produced by the valence changes of Ti induced by substitutional metal ions and/or oxygen vacancies might govern the adsorption of oxygen molecules. Therefore, the strength of the interaction between oxygen and the oxide surface can be manipulated by controlling the valence changes induced by the substitutional ions and/or oxygen vacancies.

Figure 5. (a) Dependence of the Ti3+/Ti4+ ratios, STi(III)/STi(IV), calculated from the areas of the XPS spectra of the specimens heat-treated under the reductive atmosphere, on the temperature. (b) Nb 3d XPS spectra of the catalysts prepared at 600, 700, 800, 900, and 1050 °C in Ar containing 4% H2.

Figure 6. Dependence of the atomic ratio of Nb/Ti calculated from the XPS spectra of the specimens prepared under the reductive atmosphere on the heat treatment temperature.
Study of sequential semileptonic decays of b hadrons produced at the Tevatron

We present a study of rates and kinematical properties of lepton pairs contained in central jets with transverse energy E_T > 15 GeV that are produced at the Fermilab Tevatron collider. We compare the data to a QCD prediction based on the HERWIG and QQ Monte Carlo generator programs. We find that the data are poorly described by the simulation, in which sequential semileptonic decays of single b quarks (b → l c X with c → l s X) are the major source of such lepton pairs.

I. INTRODUCTION

This study of sequential semileptonic decays of b hadrons completes the review of the heavy flavor properties of jets produced at the Fermilab Tevatron collider presented in Ref. [1]. The data set, collected with the Collider Detector at Fermilab (CDF) in the 1992-1995 collider run, consists of events with two or more jets with transverse energy E_T ≥ 15 GeV and pseudorapidity |η| ≤ 1.5, and is the same as that used in Ref. [1]. The heavy flavor purity of the sample is enriched by requiring that at least one of the jets contains a lepton (e or µ) with transverse momentum larger than 8 GeV/c. The jet containing the lepton is referred to as the lepton-jet, whereas the jets recoiling against the lepton-jet are called away-jets. Since these events have been acquired by triggering on the presence of a lepton with p_T ≥ 8 GeV/c, we call electron and muon data the samples with an electron- and muon-jet, respectively. Jets containing hadrons with heavy flavor are identified using the CDF silicon micro-vertex detector (SVX) to locate secondary vertices produced by the decay of b and c hadrons inside a jet. These vertices (secvtx tags) are separated from the primary vertex as a result of the long b and c lifetimes. The b- and c-hadron contributions are separated by employing an additional tagging algorithm [2], which uses track impact parameters to select jets with a small probability of originating from the primary vertex of the event (jpb tags). Sequential semileptonic decays are identified by searching lepton-jets for the presence of additional soft leptons (e or µ with p_T ≥ 2 GeV/c) that are referred to as slt tags. In Ref. [1], we have used measured rates of secvtx and jpb tags to determine the bottom and charmed content of this data sample; we have then tuned the parton-level cross sections predicted by the simulation, based upon the herwig [3] and qq [4] Monte Carlo generator programs, to match the heavy-flavor content of the data. Reference [1] shows that rates of lepton- and away-jets with secvtx and jpb tags, as well as the relevant kinematical properties of the data, can be modeled by tuning the simulation within the theoretical and experimental uncertainties. However, the number of away-jets with slt tags, which according to the simulation are mostly due to bb production, is found to be significantly larger than what is predicted by the conventional-QCD simulation. The observed discrepancy is consistent with previously reported anomalies [5,6,7], and opens the possibility that approximately 30% of the presumed semileptonic decays of b hadrons produced at the Tevatron are due to unconventional sources. Therefore, it is of interest to extend the earlier comparison to the yields of slt tags contained inside lepton-jets. The present analysis is based upon the same samples of data and simulated events used in Ref. [1], and makes use of the same tuning of the simulation. In Sec.
II, we evaluate rates of lepton-jets that also contain one soft lepton tag. In Sec. III, we compare the kinematics of these lepton pairs in the data and in the simulation. Section IV contains cross-checks and a discussion of systematic effects. Our conclusions are presented in Sec. V.

II. LEPTON-JETS CONTAINING AN ADDITIONAL SOFT LEPTON

We search lepton-jets for additional soft leptons (p_T ≥ 2 GeV/c) using the slt algorithm [8,9,10,11]. Pairs of trigger and soft leptons arise from four different sources: sequential semileptonic decays of single b hadrons, leptonic decays of ψ mesons, semileptonic decays of two different hadrons with heavy flavor produced by gluons branching into pairs of b or c quarks, and hadrons that mimic the experimental signature of a lepton. We compare data and simulation for the following yields of tags:

1. Dil, the number of lepton-jets containing one and only one additional soft lepton. Since approximately 50% of the J/ψ mesons produced at the Tevatron do not originate from B decays [12] and are not modeled by the heavy flavor simulation, dileptons with opposite charge, same flavor, and invariant mass 2.6 ≤ m_ee ≤ 3.6 GeV/c² and 2.9 ≤ m_µµ ≤ 3.3 GeV/c² are removed from this study.

2. Dil_SEC (Dil_JPB), the number of lepton-jets that also contain one soft lepton and a secvtx (jpb) tag due to heavy flavor. Lepton pairs consistent with J/ψ decays are also removed.

The yields of lepton pairs consistent with J/ψ decays, which are removed from the present analysis, have been compared to the simulation in Table XIV of Ref. [1]. The comparison has been used to verify the b purity of the data. The observed numbers of jets containing a lepton pair are listed in Table I. Rates of dileptons produced by heavy flavor decays in the simulation (as yet unnormalized) are shown in Table II. In the simulation, dileptons are mostly produced by sequential decays of single b hadrons and have opposite-sign charge (OS); approximately 5% of the lepton pairs have same-sign charge (SS) and are found in jets produced by gluons branching into pairs of heavy quarks. In the data, the ratio of SS to OS dileptons is appreciably higher (≃ 20%) than in the simulation. The excess of SS dileptons with respect to the simulation is attributed to hadrons that mimic the lepton signature. Therefore we use the number of SS dileptons, with a 10% error, to estimate and remove the fake-lepton contribution to OS dileptons. This rather intuitive method for estimating this background will be further discussed in Section IV.

[Table II caption fragment: rates for heavy quarks in the simulation, not yet tuned according to the fit performed in Ref. [1]; dir, f.exc, and GSP indicate the direct (LO), flavor excitation, and gluon splitting contributions predicted by herwig to the numbers of OS/SS lepton pairs with different and same flavor and to the numbers of OS-SS lepton pairs with any flavor. There is no contribution from c direct production.]

At generator level, we have verified that both the trigger and soft lepton tracks match an electron or muon originating from b- or c-hadron decays (including those coming from τ or ψ cascade decays). Lepton pairs consistent with J/ψ decays are removed as in the data.

A. Rates of soft leptons due to heavy flavor in the data and in the normalized simulation

Table III lists numbers of OS-SS lepton pairs due to heavy flavor in the data and in the simulation normalized according to the fit described in Sec. VIII of Ref. [1].
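The same-sign subtraction introduced above reduces to a few lines of arithmetic. The sketch below uses placeholder yields rather than the measured ones, and a deliberately simplified error treatment (Poisson statistics combined in quadrature with the 10% subtraction systematic); it is not the analysis code of Ref. [1].

from math import sqrt

def subtract_fakes(n_os_data, n_ss_data, n_ss_sim_hf, err_ss_sim_hf,
                   syst_frac=0.10):
    """Estimate the OS heavy-flavor yield: fakes(OS) = SS(data) - SS(sim, hf)."""
    fakes_os = n_ss_data - n_ss_sim_hf   # SS data attributed to fake leptons
    hf_os = n_os_data - fakes_os         # OS pairs attributed to heavy flavor
    stat = sqrt(n_os_data + n_ss_data)   # Poisson errors added in quadrature
    syst = sqrt(err_ss_sim_hf**2 + (syst_frac * fakes_os)**2)
    return hf_os, sqrt(stat**2 + syst**2)

# Placeholder yields (not the measured ones)
hf, err = subtract_fakes(n_os_data=1500, n_ss_data=300,
                         n_ss_sim_hf=60, err_ss_sim_hf=9)
print(f"OS heavy-flavor yield = {hf:.0f} +/- {err:.0f}")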
Table IV lists the contribution of the various production mechanisms to the numbers of OS-SS lepton pairs in the normalized simulation.

Electron simulation

In the data there are 1447 ± 65 lepton pairs in the same jet (the statistical error is ±43.8 and the systematic uncertainty of the fake-lepton removal is ±48.0). The simulation predicts 1180.7 ± 128.7 dileptons (the systematic uncertainty due to the slt tagging efficiency is ±118 and the uncertainty due to the fit used to tune the simulation in Ref. [1] and to the simulation statistical error is ±51.5). In lepton-jets with jpb tags, we find 635.5 ± 30.5 lepton pairs (the statistical error is ±27.8 and the systematic uncertainty due to the fake-lepton removal is ±12.6). The simulation predicts 530.9 ± 39.7 lepton pairs (the systematic uncertainty due to the slt tagging efficiency is ±26.5 and the uncertainty due to the tuning of the simulation and to the simulation statistical error is ±29.5). The small excess of the data with respect to the simulation is approximately a 2σ effect. In the next section, we study some kinematical properties of these lepton pairs.

[Table fragment: Dil_JPB 93.9 ± 11.0, 175.0 ± 23.6, 0, 111.4 ± 21.6, 7.0 ± 3.4]

III. COMPARISON OF THE KINEMATICS OF LEPTON PAIRS IN THE DATA AND IN THE SIMULATION

Figures 1 to 3 compare distributions of invariant mass and opening angle of dileptons contained in the same jet in the data and in the simulation¹. The small excess of lepton pairs with respect to the simulation prediction (see Table III) appears to be concentrated at invariant masses smaller than 2 GeV/c² and opening angles smaller than 0.2 rad. For dilepton invariant masses larger than 2 GeV/c², data and simulation are in reasonable agreement. The shapes of the transverse momentum distributions of the trigger and soft leptons, shown in Fig. 4, are compatible with the expectation.

[Footnote 1: The fake-lepton background is removed by subtracting the distribution of SS dileptons from that of OS dileptons, both in the data and in the simulation. In the data, errors include the ±10% systematic uncertainty of this removal.]

IV. SYSTEMATICS

The excess of lepton pairs with respect to the simulation is all concentrated at dilepton opening angles smaller than 11°. Since this is approximately the angle covered by a central calorimeter tower, we have investigated at length the possibility that the efficiency of the lepton selection criteria, described in Sec. IV of Ref. [1], is not simulated properly when two lepton-candidate tracks hit the same calorimeter tower. However, for opening angles smaller than 11°, the excess with respect to the simulation when the leptons are contained in the same tower is smaller than, but consistent with, that observed when the leptons hit two neighboring towers. We have also inspected all distributions of the tracking and calorimeter information that is used to select leptons. We have compared these distributions for lepton pairs with opening angles smaller and larger than 11° without discovering any appreciable difference. Therefore, we have investigated other possible causes; since they are of general interest, we present these studies in the following. In subsection A we verify the method used to estimate and remove the fake-lepton contribution. In subsection B we check the simulation of sequential b-decays, which represent the largest contribution to lepton pairs. Finally, in subsection C we study a handful of events containing three leptons.

A. Fake-lepton estimate
The technique of removing the fake-lepton background by subtracting SS dileptons from OS dileptons has been used by CDF in several measurements of the Drell-Yan cross section [13]. We prefer this technique to the use of the standard parametrized probability of finding a fake lepton in a jet, derived using large samples of generic-jet data [1,10,11], because the latter method might not be applicable to jets that already contain a lepton (generic-jet data do not contain enough lepton pairs to construct a reliable parametrization of this fake probability). The simulated inclusive electron sample contains 955 ± 108 OS and 63 ± 9 SS dileptons produced by heavy-quark decays. In the data, there are 1450 OS and 339 SS dileptons. After removing the dilepton rates predicted by the simulation, we would like to explain in terms of fake-lepton background the remaining 495 ± 114 OS and 276 ± 20 SS dileptons².

In principle, rates of OS and SS dileptons due to misidentification background could be different. On average, jets contain the same number of positive and negative lepton-candidate tracks. Therefore, when searching for an additional soft lepton in jets in which one track has already been identified as the trigger lepton, the number of OS candidates is larger than the number of SS candidates (these numbers will be approximately equal only for jets with a very large number of candidate tracks). We have investigated this scenario by using samples of generic jets (JET 20, JET 50, and JET 70) and their simulation described in Refs. [1,10,11]; the simulation has also been tuned to reproduce the rates of secvtx and jpb tags observed in the data. We select jets containing an slt tag, and for these jets we count the number of additional slt candidate tracks, N_C, with opposite or same charge. We also count the number of OS and SS additional soft lepton tags, Dil, found in these jets. These rates are listed in Table V. As expected, the table shows a large difference between the number N_C of OS and SS candidates. However, in generic jets, which are not rich in heavy flavor, the rates of OS and SS soft lepton pairs inside the same jet are approximately equal (to within 13%). After removing the heavy flavor contribution predicted by the simulation, we derive P_fk, the probability of converting slt candidate tracks into a fake slt tag in jets that already contain an slt tag (see Table V). We note that in generic-jet data the probability P_fk for OS candidates is 65% of that for SS candidates. Our standard estimate of the number of OS dileptons due to misidentification background assumes that it is equal to the number of observed SS dileptons minus the number of predicted SS dileptons due to heavy flavor [339 − (63 ± 9) = 276 ± 20]. We use two additional methods to verify this estimate. In the inclusive electron sample, e-jets contain N_C(OS) = 54938 and N_C(SS) = 34744 candidate tracks, respectively. In the first method, we multiply the numbers of candidates by the corresponding probabilities P_fk derived in generic-jet data. We predict a slightly smaller OS background (236 ± 38) and an amount of SS background (229 ± 14) in agreement with the estimate that includes b-hadron mixing².

[Footnote 2: The standard simulation ignores b-hadron mixing and underestimates the rate of SS dileptons due to the decays of two different b hadrons. This is a small effect; when using the time-integrated mixing parameter χ = 0.118 [14], the simulation predicts 915 ± 105 OS and 103 ± 15 SS dileptons produced by heavy-quark decays. Therefore, the difference between data and simulation, which should be attributed to the fake-lepton background, becomes 535 ± 111 OS and 236 ± 24 SS dileptons.]
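The first cross-check, scaling candidate tracks by the generic-jet fake probability, amounts to the arithmetic below. The candidate counts are those quoted above; the P_fk values themselves are hypothetical placeholders (Table V is not reproduced here), fixed only by the stated property that P_fk for OS candidates is about 65% of the SS value.

N_CAND_OS = 54938  # OS slt candidate tracks in e-jets (quoted in the text)
N_CAND_SS = 34744  # SS slt candidate tracks in e-jets (quoted in the text)

P_FK_SS = 0.0066           # hypothetical fake probability per SS track
P_FK_OS = 0.65 * P_FK_SS   # OS probability, ~65% of the SS one (as stated)

print(f"Predicted OS fake dileptons: {N_CAND_OS * P_FK_OS:.0f}")  # ~236
print(f"Predicted SS fake dileptons: {N_CAND_SS * P_FK_SS:.0f}")  # ~229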
An additional estimate of the OS background can be derived by applying the standard parametrization [1,10,11] of the fake slt probability to all OS candidate tracks. This method yields a slightly higher background estimate of 302 ± 30 OS fake dileptons³. We take the ±10% discrepancy between the different background estimates as a measure of its uncertainty. We note that the simulation of the slt algorithm relies on parametrizations based on the data and does not provide a good understanding of why, in generic-jet data, the probability P_fk is smaller for OS candidates than for SS candidates. However, imposing the conditions that the excess of 535 ± 111 OS pairs with respect to the simulation prediction² is due to fake-lepton background and that the probability P_fk does not depend on the pair sign produces a paradoxical result: in the inclusive electron sample, all 339 SS pairs are attributed to fake background whereas the simulation predicts 103 ± 15 SS pairs due to heavy flavor; in contrast, generic jets require the presence of 73 ± 25 SS pairs due to heavy flavor, not predicted by the simulation and of the same size as the predicted number of OS pairs due to heavy flavor (63 ± 24 in Table V).

We use the generic-jet data to also verify that OS and SS dileptons due to misidentification background have similar invariant mass distributions. For this purpose, we use two data sets, the jets of which have an average transverse energy comparable to that of jets in the inclusive lepton samples. The first data set is selected by requiring the presence of at least one jet with transverse energy larger than 20 GeV (JET 20). The second data set is selected by requiring the presence of at least four jets with transverse energy larger than 15 GeV and total transverse energy larger than 125 GeV (ΣE_T 125 4CL). In order to emulate the inclusive lepton sample requirement of one lepton with transverse momentum larger than 8 GeV/c, we select jets with at least one track with p_T ≥ 8 GeV/c inside a cone of radius 0.4 around their axis. We then search these jets for soft lepton tags. Figure 5 shows that the invariant mass distributions of the high-p_T track and the soft lepton track for OS and SS combinations are indeed quite similar.

[Table V caption fragment: The difference between data and simulation prediction, Dil(fk), is attributed to fake dileptons. (Dil − Dil(h.f.))/N_C is the probability that a track produces a fake soft lepton tag.]

B. Simulation of sequential b-decays

As shown in Table IV, most lepton pairs contained in the same jet are produced by sequential decays of single b hadrons. Sequential decays of b hadrons are modeled with the CLEO Monte Carlo generator (qq) [4]. The discrepancy in the shape of the invariant mass and opening angle distributions between the data and the simulation could be due to an inaccurate modeling of these decays. We cannot verify this by using any of our data, and we do it through a comparison of the process e+e− → bb at the Z-pole using the jetset 7.3 Monte Carlo generator [16] as implemented at LEP by the DELPHI collaboration [17] and our simulation, which uses the herwig and qq generators.
At our request, the DELPHI collaboration has compared dilepton invariant mass distributions in hadronic Z-decays to their simulation using selection criteria that could be easily reproduced in the CDF detector [18]. […]

C. Events with three leptons

Events in which the lepton-jet contains an additional slt tag and the away-jet also contains a soft lepton tag are the most interesting, because the rate of away-jets with slt tags is also larger than the prediction [1]. Unfortunately, there is only a handful of these events. As shown in Fig. 2 […]

V. CONCLUSIONS

We have studied rates and kinematical properties of sequential semileptonic decays of single b hadrons produced at the Fermilab Tevatron collider. This study completes the review of the heavy flavor properties of jets produced at the Tevatron reported in Ref. [1]. As in the previous analysis, we use events with two or more central jets with E_T ≥ 15 GeV, one of which (the lepton-jet) is consistent with a semileptonic bottom or charmed decay to a lepton with p_T ≥ 8 GeV/c. In the previous study, we have used measured rates of lepton- and away-jets containing displaced vertices (secvtx tags) or tracks with large impact parameter (jpb tags) to determine the bottom and charmed content of the data; we have then tuned the parton-level cross sections predicted by the simulation accordingly. The present comparison shows that there is at least a difficulty in modeling rates and kinematical properties of such lepton pairs.
Hourglass-like constrictions of the radial nerve in neuralgic amyotrophy: A case report

Neuralgic amyotrophy (NA) is a peripheral nerve disorder with a classical presentation of motor deficit following severe pain, but it is still overlooked or misdiagnosed. Formerly, the diagnosis was based on the clinical picture and electrophysiology; however, sophisticated imaging and surgical modalities have revealed structural abnormalities such as hourglass-like constrictions of the nerves. In this article, we present a case presenting with drop hand mimicking radial nerve entrapment. The patient was diagnosed with NA, and surgery revealed hourglass-like constrictions. The clinical findings improved after neurorrhaphy and physical therapy. In conclusion, hourglass-like constrictions can be prognostic factors in NA and should be searched for carefully.

Neuralgic amyotrophy (NA), also known as Parsonage-Turner syndrome or brachial plexus neuritis, typically presents with sudden-onset upper extremity pain, followed by multifocal paresis and muscle atrophy [1]. Its incidence was previously estimated as 2 to 3/100,000 per year; however, it is currently estimated as 1/1,000 [2]. Hourglass-like constrictions (HLCs) are characterized by narrowing of the nerve fascicles, first described in mononeuritis many years ago [3,4]; however, HLCs have been reported in NA cases with increasing frequency. These constrictions have shifted the treatment modality from a conservative approach (i.e., corticosteroids) to surgery (i.e., neurolysis, neurorrhaphy, or even nerve grafting) [1]. In this article, we report a case of NA presenting with drop hand which was misdiagnosed as radial nerve entrapment.

CASE REPORT

A 35-year-old male airline pilot with an unremarkable medical history complained of difficulty in the extension of his left hand and fingers shortly after sudden pain in his arm. He was diagnosed with radial nerve entrapment neuropathy, and non-steroidal anti-inflammatory drugs were initiated in the acute period to alleviate the pain. He then attended a physical therapy and rehabilitation program. However, his complaint persisted and he was referred for decompression surgery. The first decompression surgery on the radial nerve at the spiral groove was performed six weeks after the symptoms began. Two months after the initial surgery, he was admitted to our center. We considered that the localization was wrong, as the complaints of the patient and the electroneuromyography (EMG) results were incompatible. In our center, EMG showed a total lesion of the radial nerve proximal to the elbow and distal to the spiral groove, in which the sensory branch was spared. He was scheduled for exploratory surgery, and the second operation was performed at a site between the spiral groove and the proximal elbow level. During surgery, three severe constrictions were revealed in the main trunk of the radial nerve a few centimeters proximal to the lateral epicondyle (Figures 1 and 2). The constricted portion was resected, and neurorrhaphy was performed. On pathological examination, serial sections showed small proliferating nerve fascicles surrounded by collagen. Ten months later, only poor recovery had been achieved, although repeated EMG showed regeneration and reinnervation potentials in radial nerve-innervated forearm muscles, including the brachioradialis. The patient was followed with the diagnosis of NA, instead of radial neuropathy. After two years, his complaints were completely resolved and he returned to his job.
DISCUSSION

Neuralgic amyotrophy is now better recognized than before, and the diagnosis of NA has increased with the contribution of improved radiological techniques and surgical methods [1]. Despite this, the exact pathophysiological mechanism of NA has not yet been elucidated. Autoimmune processes have been mostly blamed, due to inflammation in the affected nerves [3,5]. Recently, HLCs, one of the findings of mononeuropathies such as anterior or posterior interosseous nerve syndromes, have been reported in cases of NA [5,6]. The association of HLCs with NA and their prognostic value have been established, and the surgical approach can be an alternative treatment method for NA patients with HLCs [1]. Nagano et al. [4] claimed that swelling and adhesion of nerve fascicles could develop secondary to an inflammatory process. They also proposed that limb movement and mechanical trauma could play a role in forming HLCs. A swollen nerve portion can be less flexible, and repeated limb motion can induce kinking, torsion and, consequently, constrictions [1,4]. The present case was admitted to our clinic before the novel coronavirus disease 2019 (COVID-19) pandemic, and there was no infection or vaccination in his history.

Previously, only corticosteroids and physical therapy were used in treatment; however, a few studies in the literature have shown the benefit of neurolysis, neurorrhaphy or nerve grafting for cases of NA, particularly those with HLCs [3]. Although the optimal time for surgery is not clear, it is considered that the nerve can recover spontaneously within three months [1,3]. When clinical or electrophysiological findings do not improve within the first three months, patients with NA should undergo magnetic resonance neurography (MRN) or high-resolution ultrasonography (US), and they should be referred for surgery if HLCs are seen on radiological imaging [1,5]. Although NA was formerly considered a self-limiting condition, recent studies have shown that the spontaneous prognosis of NA is not favorable. The majority of patients have residual pain or paresis over the course of months to years, even if they receive conservative treatment [1,5,6]. On the other hand, studies have demonstrated that surgical interventions in NA are significantly beneficial [3]. Our patient recovered completely after a successful neurorrhaphy and physiotherapy; therefore, early surgical intervention should be performed to avoid irreversible damage. Our patient did not undergo MRN or US; however, radiological imaging undoubtedly plays a vital role in demonstrating HLCs and ruling out possible pathologies such as a tumor. That is why cooperation among surgeons, radiologists, and neurologists is of utmost importance in cases of NA. Also, a well-planned physical therapy program supervised by a physiatrist should be part of NA treatment.

In conclusion, the diagnosis and treatment of NA can be challenging, even if the patient has a typical clinical picture. NA should be identified as early as possible, and surgical intervention is necessary for full recovery if NA presents with HLCs.

Figure 1. Hourglass-like constrictions of the main trunk of the radial nerve were treated with neurorrhaphy.

Figure 2. Three hourglass-like constrictions on the main trunk of the radial nerve are seen.
ICT and Employment in India: A Sectoral Level Analysis

How technology affects growth or employment has long been debated. After a hiatus, the debate revived once again in the form of how Information and Communications Technology (ICT), as a form of new technology, influences productivity and employment. ICT, perceived as a General Purpose Technology like the steam engine or electricity in the past, has ushered the world into a new techno-economic paradigm, given its deep social, economic and cultural implications. For instance, within the economic sphere, it is hard to imagine an economic activity that does not use it, directly or indirectly. Eventually, ICT intensity, measured as the ratio of ICT investment to total investment, increased phenomenally in industries across sectors.

Introduction

The relationship between technology and employment is complex and controversial. In the last decade, Information and Communications Technology (ICT), as a form of new technology, has emerged in a big way the world over, including in India. It is believed that after the technological waves of the steam engine in the 19th century and electricity in the 20th century, ICT is the only technology regarded as a general purpose technology (GPT). With a growth rate of over 25 percent since 2006, the ICT producing sector has grown in size and in employment generation, particularly for skilled workers. Till the early 1980s, countries, particularly those belonging to the OECD group, were apprehensive about using ICTs. There was an apprehension that the relatively higher investment made in ICT did not percolate into higher growth of employment and productivity, a notion later known as the productivity paradox (that computers are seen everywhere except in the productivity statistics), a concept introduced and popularized by Solow. The perception about the impact of ICT on employment changed over time. In the 1990s, empirical studies mostly based on case studies of the U.S. and some EU countries established a positive and significant impact of ICT on productivity and employment. This, however, gave rise to yet another debate: why productivity (TFP) or employment growth rose faster in the U.S. than in the EU. The debate got somewhat resolved towards the mid-2000s, with a general consensus emerging that it is the ICT producing industries (particularly in the services sector) that triggered productivity and employment growth in the U.S. post-1995, something found missing in the EU (Vivarelli, 2011). In other words, the EU countries, compared to the U.S., did not seem to have exploited the productivity-enhancing potential of the ICT-producing industries to the extent possible. The results marked a major departure from the results of earlier studies that doubted the potential of ICT-led productivity growth in the services sectors (Gordon, 2000). Three channels are identified through which ICT influences growth: a surge in ICT investment, strong productivity effects from ICT-producing industries, and spillover impacts in the ICT-using sectors of the economy (Vivarelli and Pianta, 2000). Though ICT spillovers are typically difficult to measure at the industry level, at the firm level they are found to be present. In India, the success story of ICT growth began gradually in the early 2000s and accelerated after 2005.
ICT has become a major source of foreign exchange earnings through foreign investment (FDI and FIIs) and the export of IT-enabled services (ITES). According to NASSCOM (2012), in 2008-09 the sector grew by 14 percent to reach $71.7 billion in aggregate revenue (including hardware). Of this, the software and services segment accounted for the major chunk ($59.6 billion). In the same year, total ICT revenue reached 5.8 percent of GDP, compared to 1.2 percent in 1997-98. Despite some slowdown, the Indian IT sector successfully weathered the global financial crisis of 2008-09. Not much research has been done on the association between ICT, productivity and employment as far as developing countries (including India) are concerned (Freeman and Vivarelli, 2011). Often, the success stories of newly industrialized countries (NICs) such as South Korea, the Philippines, Indonesia and Malaysia are cited as examples when it comes to evaluating or measuring the contribution of ICT to growth or employment (OECD, 2010). In India, reports of NASSCOM and the Planning Commission, and some other anecdotal studies, do exist, but they deal with the general socio-economic implications of ICT, such as its impact on the environment, women's empowerment, rural development, skills, and so on. In this paper, an attempt is made to evaluate the rise of ICT intensity across industries and the corresponding employment growth in India. Towards this, industries belonging to the organized sector in India are divided into three groups, ICT-producing sectors (ICTPS), ICT-using sectors (ICTUS) and non-ICT-using sectors (NICTUS), each further sub-divided into manufacturing and services. The paper is structured as follows. It starts with the definition of ICT and ICT intensity, followed by the data source and research methodology. Thereafter, the behavior of output growth, employment growth, employment elasticity and ICT intensity is analyzed for all three groups of industries.

Review of Literature

After much debate and discussion, ICT was defined as follows: the production (goods and services) of a candidate industry must primarily be intended to fulfill (or enable) the function of information processing and communication by electronic means, including transmission and display (OECD, 2010). Given its pervasive nature, ICT is considered a GPT, or a form of new technology. Theoretically, there is no direct (or well-defined) way to know the impact of technology on employment. The contradictory results on how technology affects employment arise from different assumptions about the output growth rate and demand, different levels of aggregation (sector, industry or firm), and the way indirect impacts are treated, i.e., whether these effects are included in the analysis or not (Bhalla, 1997 and Kumar, 2005). Moreover, it also depends on how technology is defined: disembodied or embodied. The former is proxied by MFP (multi-factor productivity), while the latter is embodied in the factors of production, labour or capital. For example, in neoclassical economics, technical change is measured by MFP. Empirically, the views on how technology affects employment are broadly categorized as optimistic and pessimistic. The former, based on compensation theory, holds that any technical change, with the implicit presence of various compensation mechanisms, always results in a positive employment impact, at least in the long run (Vivarelli, 2011).
It is evident from the results in many OECD countries, particularly the U.S. during 1960-2000, that technological change was accompanied by increased employment growth, along with enhanced MFP growth (Stiroh, 2002 and Vivarelli, 2011). The pessimistic view, on the other hand, though in principle accepting the working of compensation theory, rules out the possibility of a complete counterbalancing of the labour-saving (or negative employment) impact of technical change. The great economist Wassily Leontief worried that the pace of modern technological change is so rapid that many workers, unable to adjust, will simply become obsolete, like horses after the rise of the automobile (Rogoff Kenneth, WEF, 2012). In the case of ICT, there is a growing apprehension that it has weakened the positive correlation between growth, productivity and employment, something that was one of the main characteristics of the post-World War II period known as the 'golden age' (Rifkin, 1995). Many countries, developed and developing, have been experiencing structural unemployment (or this weakening relationship) originating from ICT (Vivarelli and Pianta, 2000). In the last two decades, ICT has emerged in a big way the world over, including in India. Accepted as a general purpose technology (GPT), ICT has ushered the world economy into a new techno-economic paradigm. However, its influence on productivity or employment varies across firms, industries and countries. The pioneering hypothesis in this regard was made by Freeman, Clark and Soete (1982): that ICT is good not only for productivity but for employment as well. They further concluded that structural unemployment in the EU during the 1970s was not due to ICT but due to socio-institutional rigidities. OECD (2010) identified many channels, direct and indirect, through which ICT affects employment growth. Harrison et al. (2006) investigated the effect of ICT on employment growth in a number of OECD countries; they concluded that during 1998-2000, ICT had a direct and positive employment impact, and it also had a positive indirect employment impact operating through the compensation mechanism. However, as far as the indirect employment impact of ICT is concerned, it is estimated that it will result in a loss of nearly 5-10 million jobs annually the world over (Rogoff Kenneth, WEF, 2012). Surprisingly, the labour market is able to absorb these losses in other sectors. Further, in empirical studies, a distinction is also made, on the basis of the way technology is used, between product and process innovations. In the latter case, when the new product is used as capital in the production process of other goods or services, the technology is regarded as a job killer. In the former case, the new or improved product is used as a final product in consumption, and then it will certainly be a job creator, provided it does not meet demand deficiency (Schmidt, 1983; Pissarides and Vallanti, 2003). When measuring the employment impact of ICT, ICT as a new technology is treated as a product innovation in the ICT-producing sectors and as a process innovation in the ICT-using sectors (OECD, 2010). In the former, its employment impact is immediate, direct and positive, i.e., the higher the output growth, the higher the employment, assuming no demand deficiency; in the latter it is indirect, positive (or negative) and often arises in the long run. Whether the net employment impact is positive or negative therefore depends on the relative sizes of these two effects.
Empirical studies conducted on many OECD countries, particularly the U.S. and the EU, concluded that in the former the ICT-producing sector is stronger both in terms of MFP and employment growth. In India, the contribution of ICT to GDP and employment is well documented (NASSCOM, 2012). Nevertheless, there are apprehensions that it has resulted in negative employment effects in the sectors using it. In other words, it is believed that the problem of unemployment has been further accentuated by the rise in ICT use. Results at the aggregate level may at times be misleading; they can obscure the reality at the disaggregated level (i.e., at the industry or firm level). The study is therefore conducted at the industrial level as well. Towards this, the total number of industries is classified into ICT-producing (ICTPS), ICT-using (ICTUS) and non-ICT-using (NICTUS) industries, categorized further into manufacturing and services units. The ICTPS includes producers of IT hardware, communication equipment, telecommunications and computer services (including software). The distinction between ICTUS and NICTUS is made on the basis of the level of ICT intensity. Practically, no industry can be classified as a non-ICT industry, since every industry uses some amount of ICT, directly or indirectly. Theoretically, an industry is defined as non-ICT if its ICT intensity is less than one third of the national average. When measuring the impact of ICT on employment, the concept of employment elasticity is used to find the extent of labour demand for every unit of output produced. Employment elasticity (EE) is defined as the ratio of the employment growth rate to the output growth rate. It measures the employment content of an additional unit of output. In other words, EE is defined by the formula

EE = (ΔL/L) / (ΔY/Y),

where L stands for employment and Y denotes the value of output. The numerator is the percentage change in employment, while the denominator is the percentage change in output. EE shows the degree of responsiveness of employment to a percentage change in GDP. It is used to understand labour market dynamics and is therefore used in labour policy formulation.

Industries

The idea of classifying industries into ICT-producing, ICT-using and non-ICT-using industries is derived from Stiroh (2002), who used the relative size of the contribution of these groups to provide a plausible explanation for the productivity difference between the U.S. and Europe. Van Ark et al. (2002) extended the same definition to 16 OECD countries and 49 industries. The hypothesis underlying the geographical extension of the original classification introduced by Stiroh (2002) is that in each industry ICT would face the same pattern of adoption across countries, with the U.S. considered the optimal pattern. Of the original 118 industries in total, 13 were classified as ICT-producing, 49 as ICT-using and the remaining 66 as non-ICT-using. In India, because of data constraints, only 89 industries are considered, of which 9 fall under the ICT-producing sector, 39 under the ICT-using sector and the remaining 41 under the non-ICT-using category. Like the original classification, the three categories are sub-classified into the secondary and service sectors as follows: 6 and 3; 19 and 20; and 31 and 10 respectively in ICTPS, ICTUS and NICTUS.
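As a minimal illustration of the employment-elasticity definition above, the following Python sketch computes EE from start- and end-of-period employment and output figures; the numbers are purely hypothetical, not from the paper's dataset:

```python
def employment_elasticity(l_start, l_end, y_start, y_end):
    """Employment elasticity: %-change in employment / %-change in output."""
    pct_dl = (l_end - l_start) / l_start  # delta-L over L
    pct_dy = (y_end - y_start) / y_start  # delta-Y over Y
    return pct_dl / pct_dy

# Hypothetical figures: employment in millions, output in billions of rupees.
ee = employment_elasticity(l_start=10.0, l_end=10.4, y_start=500.0, y_end=565.0)
print(f"EE = {ee:.2f}")  # 0.04 / 0.13 is roughly 0.31
```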
ICT Intensity in India: A Group-Level Analysis

In this section, an attempt is made to analyze the behavior of the output growth rate, employment growth rate, employment elasticity (EE) and ICT intensity of the groups identified as ICTPS, ICTUS and NICTUS. The total period (2000-10) is divided into two sub-periods: Period I (2000-05) and Period II (2005-10). The analysis is made first at the aggregated and then at the disaggregated level, i.e., at the group level. Employment and output growth rates are given as compound annual growth rates (CAGR). As shown in Table 1, the Indian economy at the aggregate level registered an impressive output growth rate of 12 percent since 2000, with 10 percent and 13 percent in Period I and Period II respectively. Similarly, the employment growth rate is found to be 3 percent at the aggregate level, with 2.5 and 4 percent respectively in Period I and Period II. Consequently, EE increased from 0.25 to 0.29. What led this to happen, and to what extent can it be attributed to the increase in new technology (or ICT intensity), which went up remarkably from 2 percent in Period I to 8 percent in Period II? In order to capture a comprehensive picture, the study is extended to the disaggregated level.

ICT Producing Group (ICTPS): Among all groups, the ICTPS posted the highest growth rates of employment and output in both periods. As shown in Table 1, the former accelerated from 9.5 percent in Period I to 13.4 percent in Period II, and the latter also remained at a double-digit level, 30 percent and 17 percent respectively in Period I and Period II. This led EE to increase from 0.31 in Period I to 0.75 in Period II. In other words, a doubling of output raised labour demand by 31 percent in Period I but by 75 percent in Period II. The ICTPS group also recorded the maximum increase in ICT intensity, from 2.2 percent in Period I to 11.6 percent in Period II. In India, in both sub-groups of ICTPS, i.e., the secondary and services sectors, rising ICT intensity was followed by increased employment growth. For instance, the former recorded an output growth rate of 16 percent in Period II, up from 9 percent in Period I, and an employment growth rate of 5 percent, up from 2 percent. Consequently, EE rose from 0.2 to 0.31. This sub-group recorded an increase in ICT intensity from 3 percent in Period I to 6 percent in Period II. Further, as shown in Table 2, all six industries except industry group 323 (manufacture of TV and radio receivers and recording or reproducing apparatus) posted accelerated growth rates of output and employment in Period II. The EE within the secondary sub-group, however, shows a mixed trend. The services sub-group of ICTPS witnessed the highest growth rates in output, employment and ICT intensity. For instance, as shown in Table 3, the output and employment growth rates are estimated at 40 percent and 35 percent respectively in Period I; though they declined to 19 percent and 22 percent in Period II, these are still the highest among all groups and sub-groups. Though EE declined marginally in Period II, it remained around one. Further, as evident in Table 3, this impressive growth was recorded by all the constituent industries: 642 (telecommunications), 722 and 723 (software consultancy and data processing). The results outlined above for the ICTPS group hence indicate a positive relationship between ICT and employment.
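Since the growth rates above are CAGRs, the reported elasticities are ratios of CAGRs. A minimal sketch follows; the level figures are hypothetical, chosen so that the implied rates roughly match the rounded Period I aggregates reported above (the published elasticities match only up to rounding):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate over `years` years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical employment (index) and output (index) levels over a 5-year sub-period:
emp_cagr = cagr(start_value=100.0, end_value=113.1, years=5)  # roughly 2.5% p.a.
out_cagr = cagr(start_value=100.0, end_value=161.1, years=5)  # roughly 10% p.a.
print(f"employment CAGR = {emp_cagr:.3f}, output CAGR = {out_cagr:.3f}")
print(f"EE = {emp_cagr / out_cagr:.2f}")  # about 0.25, matching Period I's aggregate EE
```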
ICTPS is a sector in which new technology is used as a product (ICT product innovation), thus satisfying the compensation mechanism via new products: new technology always brings new employment opportunities in the sector producing it, provided there is no demand deficiency (Vivarelli, 2011).

ICT Using Sectors (ICTUS)

The ICTUS group consists of 39 industries in total, 19 in the secondary and the remaining 20 in the services sector. The group has an ICT intensity higher than the non-ICT group but lower than the ICT-producing group, and can therefore be termed an intermediate group. A comparative analysis shows that the group recorded employment and output growth rates higher than NICTUS but lower than ICTPS. EE is recorded to have declined in Period II. The important question is what led this to happen. At the disaggregated level, the secondary sub-group of ICTUS witnessed more or less the same trend as the group as a whole, be it the output or the employment growth rate, and so also the EE: EE declined sharply from 0.36 to 0.23, as evident in Table 1. ICT intensity is found to have increased from 2 percent in Period I to 3.3 percent in Period II. Further, analysis at the industry level within the sub-group shows the same pattern of output and employment growth as at the aggregated level of the sub-group, be it publishing (221), reproduction of recorded media (223) or transport equipment (359) (Pianta, 2005 and Brynjolfson, 2004).

Non ICT Using Sector (NICTUS)

The NICTUS group, which by definition has the least ICT intensity, includes 40 industries in total (30 from the secondary and 10 from the services sector). As shown in Table 1, both employment and output growth rates accelerated, but EE declined marginally from 0.5 in Period I to 0.4 in Period II. ICT intensity increased, though from a lower base. Further, as evident in Table 1, the secondary sub-group of NICTUS witnessed employment and output growth rates changing from 4.14 percent and 7 percent respectively in Period I to 4.8 percent and 8 percent respectively in Period II. Given this, EE declined from 0.60 to 0.56. ICT intensity is recorded to have gone up. But, as evident in Table 6, industries in the secondary sector show a mixed trend in EE. Similarly, the services sub-group also recorded a marginal decline in EE in Period II. Contrary to this, industries within the sub-group portray a mixed trend of increase in EE. Interestingly, all industries experienced increased ICT intensity in Period II, as shown in Table 7. From the above description, it is clear that ICT intensity has some relationship with employment, but at this stage it is difficult to comment on the nature of this relationship, i.e., what the net employment impact of ICT intensity is at the sectoral, group and aggregate levels. In the next section, an attempt is made to assess this.

Conclusion

Since 2000, the development and diffusion of ICT has been on the rise the world over, including in India. ICT, accepted as a GPT (General Purpose Technology), has led the Indian economy into a new techno-economic paradigm. The empirical study conducted for India gives many important results on the relationship between new technology and employment. Starting with the ICTPS group, employment growth has risen along with ICT intensity since 2000, something found true for both the secondary and services sub-groups as well.
Though the extent of employment gain or loss due to ICT intensity is empirically beyond the scope of this paper, the theory of compensation mechanisms helps to explain the results found. First, as mentioned above, the compensation mechanism via new products, such as semiconductors, computer hardware and telecommunication devices, always results in a positive employment impact. Second, in India, when ICT is used as a product innovation, it leads to a positive employment impact. Third, compensation via a decline in the prices of ICT products is also found to be strong in India. In other words, as per Moore's Law, the power of the semiconductor doubles every 20 months while its price halves, giving a major boost to the demand for ICT and related products and, therefore, to employment. Fourth, compensation via increased investment also helped raise employment growth. For instance, the ICT sector recorded comparatively high investment growth, contributed to significantly by FDI inflows; over 600 multinational companies (MNCs) are known to be sourcing their product development and engineering services from their centers in India (GOI, 2007-08). Like ICTPS, employment growth in the ICTUS group in India has gained positively from new technology, but the impact is found significant only in Period II. A significant share of total output and employment comes from the ICT-using services sector. In this group, ICT is used as a process innovation, which has a negative employment impact through the replacement of highly labour-intensive electromechanical work with increasingly integrated components produced by automation in other manufacturing segments. Conversely, the services sub-group witnessed a positive employment impact of ICT intensity, which may be due to all the other compensation mechanisms working in its favor. Finally, in the NICTUS group, in both the secondary and services sectors in India, the impact of ICT on employment may not be very strong because of the low base of ICT intensity.

Source: Own computation using the Prowess data source compiled by CMIE.
Review on Performance of Asphalt and Asphalt Mixture with Waste Cooking Oil

To make full use of the regenerative value of waste cooking oil, and to address the environmental pollution and food security issues it causes, waste cooking oil has been suggested for use in asphalt, where it is used to adjust the performance of virgin and aged asphalt. This review article summarizes research progress on the performance of asphalt and asphalt mixtures with waste cooking oil. The results show that a moderate dosage of waste cooking oil improves the low-temperature performance and construction workability of petroleum asphalt and aged asphalt. The mixing and compaction temperatures of asphalt mixtures with waste cooking oil are reduced by up to 15 °C. The rutting resistance and fatigue resistance of modified asphalt and modified asphalt mixtures with waste cooking oil are impaired. After the addition of waste cooking oil to aged asphalt, the high-temperature performance and shear rheological properties of the aged asphalt are recovered. The regeneration effect of waste cooking oil on aged asphalt and aged asphalt mixtures is close to that of a traditional regeneration agent, and some properties of asphalt or asphalt mixtures with waste cooking oil are better. There is no chemical reaction between waste cooking oil and asphalt, but the asphalt components and the absorption peak intensities of some functional groups are changed; the light-component content of the asphalt binder is usually increased. Further research on the engineering application of asphalt mixtures with waste cooking oil should be conducted, focusing mainly on methods for improving the performance of asphalt and asphalt mixtures with waste cooking oil.

Introduction

Waste cooking oil is generated from cooking and food production processes; both waste edible vegetable oil and waste edible animal oil are classified as waste cooking oil. It mainly includes oil left over from repeatedly frying food, recycled waste oil from the catering trade, gutter oil and condensate oil from kitchen exhaust hoods. According to statistics from the China National Grain & Oils Information Center, human consumption of edible oil in China was 3545.0 million tonnes in 2020, and an average of 500~700 million tonnes of waste cooking oil is produced every year. Waste cooking oil contains many pathogenic and carcinogenic components, and the levels of heavy metals, aflatoxin and benzopyrene exceed standard limits. If waste cooking oil flows back to people's dining tables, food security and human health are seriously threatened [1]. Currently, waste cooking oil is usually reused in the form of commercial grease, stearic acid, industrial oleic acid, chemical raw materials, biodiesel, release agents, modifiers and regenerating agents [2,3]. Waste cooking oil can be used in road engineering without being decolored or deacidified. It can also reduce the viscosity of a binder, and it may not produce secondary pollution [4]. The main component of waste cooking oil is fatty acid, which belongs to the aromatic oils and is similar to the aromatic fraction of the asphalt components. Waste cooking oil can therefore be used to soften asphalt, restore the performance of aged asphalt, and regulate the asphalt components and rheological properties [5-7].
The compatibility between asphalt and modifier is improved by the addition of waste cooking oil, and some properties of asphalt with waste cooking oil can be adjusted. Petroleum-based rejuvenators are widely used to restore the performance of aged asphalt, but there are still many technical problems [8,9]: (1) the light components of petroleum-based rejuvenators volatilize easily at high temperature, so their regeneration efficiency is relatively poor; (2) the content of aromatics and unsaturated bonds in petroleum-based rejuvenators is relatively high, so these rejuvenators oxidize easily at high temperature, reducing their aging resistance and durability. Meanwhile, petroleum-based rejuvenators originate from petroleum and consume non-renewable resources, so they do not fit the direction of sustainable development. Using waste cooking oil as an asphalt-regenerating agent allows the recycling of both waste asphalt pavement materials and waste cooking oil. Some properties of recycled asphalt with waste cooking oil are better than those obtained with other waste oils. When asphalt regenerants prepared from waste cooking oil, waste bio-oil and waste engine oil are added to 70# aged asphalt at dosages below 13%, the penetration and softening point of the recycled asphalt with waste cooking oil are higher than those of the other recycled asphalts, but its ductility is the worst [10]. When rejuvenated asphalts are prepared with each regenerant at its optimum dosage, the colloidal stability of the recycled asphalt with waste cooking oil is inferior to that with waste bio-oil or waste engine oil, and the rejuvenated asphalt with waste cooking oil ages most drastically [11]. The improvement effect of waste engine oil on the low-temperature performance of aged asphalt is smaller than that of waste vegetable oil. When the dosage of the different regenerants in the rejuvenated asphalt mixture is the same, the strength of the rejuvenated asphalt mixture with waste engine oil is higher, but the durability of the rejuvenated asphalt mixture with waste cooking oil is much better [12]. However, the recovery rate of waste cooking oil is relatively low in China, and waste cooking oil is a potential source of environmental pollution; if not treated properly, it gives rise to soil and environmental pollution. To increase the recycling utilization rate of waste cooking oil in asphalt pavement, the application of waste cooking oil as an asphalt modifier, asphalt regenerant and biological binder is discussed on the basis of domestic and international research. Articles discussing "waste cooking oil and modified asphalt", "waste cooking oil and aged asphalt" or "waste cooking oil and asphalt mixture" were searched for in Web of Science and www.cnki.net. Articles published from 2010 to 2022, especially those from the last five years, were chosen and analyzed in this paper; only articles from core journals were included. The ordinary performance and rheological properties of the binder are analyzed, and the road performance of the asphalt mixture is explained. The main factors influencing the performance of modified asphalt and asphalt mixtures with waste cooking oil are discussed.
Microscopic tests, scanning electron microscopy and four-fraction separation tests have been selected to examine the mechanism between waste cooking oil and asphalt binder, and the reaction between them is illustrated. The existing problems in the current research and application of waste cooking oil in asphalt pavement are identified, providing a reference for the wide application of waste cooking oil in asphalt pavement. When waste cooking oil is added to petroleum asphalt by a shearing process, the viscosity, stiffness, elastic recovery, temperature sensitivity, rutting resistance and fatigue cracking resistance of the modified asphalt are reduced, and they decline progressively as the dosage of waste cooking oil increases. However, the low-temperature crack resistance, fatigue resistance and self-healing efficiency of modified asphalt with waste cooking oil are improved.

Research on the Performance and Mechanism of Asphalt and Asphalt Mixture with Waste Cooking Oil

Wen et al. [13] studied the PG grades of blended modified asphalts and found that they are reduced, except for the blend of PG76-22 base asphalt with a dosage of 10% waste cooking oil. The fatigue cracking resistance of modified asphalt with waste cooking oil is reduced, and it decreases further as the amount of waste cooking oil increases. Azahar et al. [14] pointed out that the performance of the modified asphalt is significantly affected by the quality of the waste cooking oil, which is attributed to the interaction bonding between the waste cooking oil and the asphalt binder. After the waste cooking oil is chemically treated, its acid value falls from 1.66 to 0.54 mL/g. Owing to the increased interaction bonding between the treated waste cooking oil particles and the particles in the asphalt binder, the softening point, viscosity, rutting resistance and aging index of asphalt blended with such waste cooking oil are improved, while the penetration and temperature sensitivity are reduced. Eriskin et al. [15] found that the softening point of modified asphalt with waste frying oil decreases while the penetration value increases, so modified asphalt with waste frying oil is suggested for use in colder regions. Wang et al. [16] evaluated the aging performance of different asphalt binders by the four-fraction separation test, gel permeation chromatography and thermogravimetric analysis. It was found that the dispersed system of the modified asphalt is improved by the addition of bio-oil produced from waste cooking oil, and the bio-oil helps to balance the effect of aging. The aging performance of bio-oil-modified asphalt prepared from waste cooking oil is inferior to that of petroleum asphalt, but their thermal stability is essentially the same. The colloidal index and molecular weight of the bio-oil-modified asphalt do not increase with increasing dosage of waste cooking oil. When different dosages of waste cooking oil (5%, 10% and 15%) are added to 70# base asphalt and SBS-modified asphalt, the failure stress of the modified asphalt is reduced, but the failure strain and fatigue life at intermediate temperature are increased.
In addition, the yield energy of modified asphalt with waste cooking oil declines gradually as the amount of waste cooking oil increases [17]. Sun et al. [18] studied bio-oil formed from the by-product of waste cooking oil and found that it has good compatibility with petroleum asphalt according to the separation tendency test for polymer-modified asphalt; the softening point differences of the various blended asphalts are less than 2.5 °C. The asphaltene content of the bio-oil is less than 1%, but its saturate and resin contents are higher than those of the base asphalt. Maharaj and Singh-Ackbarali et al. [19,20] found that the influence of waste cooking oil on asphalt elasticity is closely related to the chemical composition of the binder. With increasing dosage of waste cooking oil, the phase angle of rock-asphalt-modified asphalt with waste cooking oil increases gradually, the phase angle of blended asphalt made from waste cooking oil and petroleum asphalt declines, and the phase angle of composite-modified asphalt with waste cooking oil first decreases and then increases, as shown in Figure 1. Ma et al. [21] studied the preparation process; the dosage of waste cooking oil residue in rubber asphalt modified with waste cooking oil residue was determined by an orthogonal design method, and the best preparation process is shearing for 2 h at 220 °C with 6.0% waste cooking oil residue. The storage stability of the rubber asphalt is improved by the addition of waste cooking oil residue, which helps promote the swelling and degradation of the rubber particles (as shown in Figure 2), and the aging resistance of rubber asphalt with waste cooking oil residue is also improved. The interaction between crumb rubber and asphalt is reduced by the addition of waste cooking oil residue, which increases the proportion of the lubricant phase. Gökalp et al. [22] found that waste vegetable cooking oil significantly decreases the softening point, viscosity and rutting factor of asphalt, and that it can be utilized as an anti-aging agent for aged asphalt. Liu et al. [23] pointed out that the storage stability of SBS/EVA-modified asphalt is improved by the addition of waste cooking oil, and the compatibility of the SBS and EVA is best when the dosage of waste cooking oil is 10%. Because waste cooking oil is mainly composed of low-molecular-weight components, the light components of polymer-modified asphalt with waste cooking oil are replenished, the content of medium molecular sizes is increased (as listed in Table 1), and the internal microscopic structure of the asphalt becomes smoother. Li et al. [24] found that the optimal dosage of waste cooking oil in epoxy asphalt binder is 4%, determined by consideration of viscosity, microstructure, damping behavior and mechanical performance. The viscosity of the binder and the particle size of the dispersed phase are reduced, the damping behavior and elongation at break are improved, and the construction time of the modified asphalt is extended by as much as 24%. The influence of waste cooking oil on the performance of asphalt is closely related to the base asphalt as well as to the dosage and quality of the waste cooking oil.
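Several of the studies above report rutting factors and phase angles from dynamic shear rheometer (DSR) tests. As a hedged sketch of how those Superpave binder parameters are typically computed, the following uses hypothetical DSR readings; the 1.0 kPa and 5000 kPa limits are the standard Superpave acceptance criteria rather than values from the cited papers:

```python
import math

def rutting_factor(g_star_kpa, delta_deg):
    """Superpave rutting parameter |G*|/sin(delta) from DSR data, in kPa."""
    return g_star_kpa / math.sin(math.radians(delta_deg))

def fatigue_factor(g_star_kpa, delta_deg):
    """Superpave fatigue parameter |G*|*sin(delta), in kPa."""
    return g_star_kpa * math.sin(math.radians(delta_deg))

# Hypothetical DSR readings: unaged binder at its high PG temperature,
# and an aged binder at intermediate temperature.
rf = rutting_factor(g_star_kpa=1.3, delta_deg=86.0)
ff = fatigue_factor(g_star_kpa=5200.0, delta_deg=48.0)
print(f"|G*|/sin(delta) = {rf:.2f} kPa, passes >= 1.0 kPa: {rf >= 1.0}")
print(f"|G*|*sin(delta) = {ff:.0f} kPa, passes <= 5000 kPa: {ff <= 5000.0}")
```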
Considering the overall performance of the asphalt binder, the dosage of waste cooking oil added to petroleum asphalt is generally lower than 10%; when the dosage is too high, the performance of the asphalt binder is damaged. Sun et al. [25,26] prepared bio-asphalt with a high content of waste cooking oil residues and discussed the influence of different modifiers on its performance. The study pointed out that a composite modifier of rock asphalt, hydrocarbon resin, low-density polyethylene and linear SBS polymer improves bio-asphalt performance the most, with the optimum dosage of each modifier determined by the uniform design method. The high-temperature performance of this bio-asphalt with the composite modifier is close to that of SBS-modified asphalt, its low-temperature performance is better, and its aging sensitivity is inferior to that of the 70# base asphalt. The physical properties of some of the waste cooking oils are listed in Table 2 (including bio-asphalt produced from waste cooking oil after a thermochemical process); the other waste cooking oils were collected directly from restaurants. To achieve an obvious improvement effect of waste cooking oil on asphalt binder performance, the waste cooking oil should satisfy the following performance requirements [16-18,21,23,27]: (1) good fluidity, with a viscosity in the range 100~200 mPa·s at 25 °C; (2) a density not too different from that of the asphalt, greater than 0.90 g/cm³; (3) a low content of impurities; (4) a low moisture content, since otherwise the safety of asphalt with waste cooking oil cannot be guaranteed.

Research on Performance of Modified Asphalt Mixture with Waste Cooking Oil

When a mixture of asphalt modified with bio-oil or bio-asphalt derived from waste cooking oil is prepared, the construction temperature of the asphalt mixture decreases because of the reduced viscosity of the binder [27]; the construction temperatures of asphalt mixtures with waste cooking oil are listed in Table 3. The dynamic modulus, rut resistance and fatigue resistance of the asphalt mixture with waste cooking oil decrease with increasing amount of waste cooking oil, while the low-temperature crack resistance gradually improves; there is no obvious correlation between the moisture susceptibility of the mixture and the dosage of waste cooking oil. Meanwhile, the tensile strength ratios of asphalt mixtures with different dosages of bio-asphalt produced from waste cooking oil can all meet the requirements [13].
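The construction-temperature reductions discussed above follow from the binder's viscosity-temperature relationship. A simplified sketch follows, assuming a log-linear viscosity-temperature fit and the commonly used equiviscous targets of about 0.17 Pa·s for mixing and 0.28 Pa·s for compaction; the viscosity data are hypothetical:

```python
import numpy as np

# Hypothetical rotational-viscosity data for a binder (temperature in deg C, viscosity in Pa.s).
temps = np.array([120.0, 135.0, 150.0, 165.0])
viscs = np.array([1.10, 0.52, 0.27, 0.15])

# Fit log10(viscosity) as a linear function of temperature (adequate over this narrow range).
slope, intercept = np.polyfit(temps, np.log10(viscs), 1)

def equiviscous_temp(target_pa_s):
    """Temperature at which the fitted viscosity equals the target value."""
    return (np.log10(target_pa_s) - intercept) / slope

print(f"mixing temperature     ~ {equiviscous_temp(0.17):.0f} deg C")
print(f"compaction temperature ~ {equiviscous_temp(0.28):.0f} deg C")
```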
The density of asphalt mixtures with untreated or treated waste cooking oil is increased, because the waste cooking oil has a lubricating function and good flowability: the friction between aggregate particles is reduced by the addition of waste cooking oil, so the asphalt mixture with waste cooking oil is easier to compact [31]. Eriskin et al. [15] found that the optimum asphalt content, indirect tensile strength and self-healing temperature of modified bitumen mixtures with waste frying oil decline with increasing dosage of waste frying oil, the decrease in optimum asphalt content being up to 89%. The tensile strength ratio of the modified bitumen mixture also declines with increasing dosage of waste frying oil, but it can still meet specification requirements. Azahar et al. [32] studied the compactness, elastic modulus, indirect tensile strength and creep resistance of modified asphalt mixtures with treated waste cooking oil and found that they are higher than those of the base asphalt mixture and of the mixture with untreated waste cooking oil; the highest creep stiffness of the mixture with treated waste cooking oil is improved by about 25% compared with the base asphalt mixture. Niu et al. [33] found that the Marshall stability, immersion Marshall residual, dynamic stability and tensile strength ratio of a modified asphalt mixture with 5% waste cooking oil are all lower than those of the original asphalt mixture, but the performance of the mixture with 5% waste cooking oil is improved by the addition of ground tire rubber; the modified asphalt mixture with 5% waste cooking oil and 20% ground tire rubber has better rut resistance, water stability and low-temperature performance. Yan et al. [34] evaluated the mechanical behavior of asphalt mixtures with European rock asphalt and waste cooking oil and found that the low-temperature anti-cracking performance of the asphalt mixture can be improved by the addition of waste cooking oil. The addition of European rock asphalt reduces the anti-cracking performance of the mixture, but the waste cooking oil makes up for this adverse effect. Several properties of the asphalt mixture are significantly improved by adding proper dosages of European rock asphalt and waste cooking oil. Hu et al. [35] studied the noise-reduction performance of porous asphalt mixtures with Sasobit and waste cooking oil. The workability of high-viscosity asphalt rubber is improved by the addition of Sasobit and waste cooking oil, and the viscosity of high-viscosity asphalt rubber with waste cooking oil is lower. The damping performance of the porous asphalt rubber mixture with waste cooking oil is improved, so the vibration noise and friction noise of this mixture are smaller. After the addition of waste cooking oil to an asphalt mixture, the temperature stability and strength of the mixture are markedly affected, but the influence of waste cooking oil on the moisture susceptibility of the mixture is relatively small. Some properties of the asphalt mixture with waste cooking oil are damaged, so waste cooking oil is not recommended to be added alone to an asphalt mixture. To ensure the maximum performance of the asphalt mixture, waste cooking oil should be blended into the asphalt binder together with other modifiers (rubber powder, SBS and SBR) [33,34].
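Several of the mixture results above are stated in terms of the tensile strength ratio (TSR). As a minimal sketch of how this moisture-susceptibility index is commonly computed, the following uses hypothetical indirect tensile strengths; the 80% acceptance level is the typical AASHTO T283-style threshold rather than a value from the cited studies:

```python
def tensile_strength_ratio(its_conditioned_kpa, its_dry_kpa):
    """TSR (%) = indirect tensile strength after moisture conditioning over dry ITS."""
    return 100.0 * its_conditioned_kpa / its_dry_kpa

# Hypothetical indirect tensile strengths (kPa) for a mixture with waste cooking oil:
tsr = tensile_strength_ratio(its_conditioned_kpa=690.0, its_dry_kpa=810.0)
print(f"TSR = {tsr:.1f}%, passes >= 80%: {tsr >= 80.0}")  # 85.2%, True
```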
Function Mechanism between Waste Cooking Oil and Asphalt Binder

The mechanism of interaction between waste cooking oil and petroleum asphalt or polymer has been examined by microscopic tests. Fourier transform infrared spectroscopy shows that waste cooking oil and base asphalt blend physically, with almost no chemical reaction occurring. New absorption peaks appear in modified asphalt with dosages of 4% and 8% waste cooking oil, but these peak positions correspond to the infrared spectrum of the waste cooking oil itself, indicating that no genuinely new absorption peaks are formed [18]. The infrared spectroscopy test results are listed in Table 4. Wang et al. [17] found that the functional groups of 70# base asphalt are nearly unchanged after the addition of waste cooking oil: the sulfoxide index remains almost the same, but the carbonyl index increases, as listed in Table 5. Research confirms that there is no chemical reaction between waste cooking oil or waste cooking oil residue and polymer-modified asphalts such as rubber-modified asphalt, SBS/EVA composite-modified asphalt and SBS-modified asphalt, although there are some differences in the absorption peak intensities of a few functional groups [21,23,27]. Sun et al. [27] found that specific functional groups (carbonyl, carbon-oxygen bond, methylene) have approximately linear relationships with the dosage of bio-oil from waste cooking oil (listed in Table 6), which means that the bio-oil and SBS-modified bitumen are mainly physically mixed. Liu et al. [23] pointed out that the carbonyl index and sulfoxide index of SBS/EVA-modified asphalt generally increase with the addition of waste cooking oil (listed in Table 7), and the absorption intensities of the peaks change, which may be the reason for the change in asphalt performance. Waste cooking oil, as a low-viscosity material, is also used to regenerate aged asphalt: the low-temperature crack resistance, fatigue performance, adhesion and construction workability of aged asphalt are improved. The ductility of aged asphalt is increased by the addition of waste cooking oil, and the rotational viscosity of the rejuvenated asphalt binder is decreased. Yan et al. [36] found that the low-temperature performance of a rejuvenated asphalt binder with 8% fried food waste oil is equivalent to that of the virgin asphalt binder, and the mixing and paving temperatures of the rejuvenated binder are decreased. Another study pointed out that the ductility and viscosity of aged asphalt are recovered by the addition of waste cooking oil, and these indicators for aged asphalt with 3% waste cooking oil are close to the level of the base asphalt [37], as listed in Table 8. Wan et al. [38] pointed out that the low-temperature flexibility and crack resistance of a rejuvenated asphalt binder with waste cooking oil are improved, and the improvement is better than that achieved with waste lubricating oil, as listed in Table 9. Yang et al. [39] found that both waste cooking oil and a WROB-modified rejuvenator containing waste tire crumb rubber and waste cooking oil are conducive to the low-temperature anti-cracking performance of aged asphalt, but the WROB-modified rejuvenator performs better in recovering the low-temperature crack resistance, as listed in Table 10.
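The carbonyl and sulfoxide indices cited above are usually computed from FTIR peak areas. A minimal sketch follows, using one common definition in the asphalt-aging literature (oxidation peak areas normalized by the aliphatic reference bands near 1460 and 1376 cm-1); the area values are hypothetical:

```python
def carbonyl_index(a_1700, a_1460, a_1376):
    """Carbonyl index: area of the C=O band (~1700 cm^-1) over the aliphatic reference bands."""
    return a_1700 / (a_1460 + a_1376)

def sulfoxide_index(a_1030, a_1460, a_1376):
    """Sulfoxide index: area of the S=O band (~1030 cm^-1) over the aliphatic reference bands."""
    return a_1030 / (a_1460 + a_1376)

# Hypothetical integrated peak areas from an FTIR spectrum of a binder:
areas = {"1700": 0.42, "1030": 0.95, "1460": 7.8, "1376": 3.1}
print(f"carbonyl index  = {carbonyl_index(areas['1700'], areas['1460'], areas['1376']):.3f}")
print(f"sulfoxide index = {sulfoxide_index(areas['1030'], areas['1460'], areas['1376']):.3f}")
```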
Uz et al. [40] pointed out that the viscosity of aged asphalt with 6% waste vegetable cooking oil is close to that of base asphalt. Li et al. [41,42] found that waste cooking oil has a significant influence on the viscosity of recycled asphalt, the viscosity decreasing as the dosage of waste cooking oil increases. Studies indicate that the ductility and viscosity of rejuvenated asphalt with waste cooking oil can be recovered to the level of the base asphalt, which is related to the dosage and the macromolecular substances of the waste cooking oil [43,44]. Bilema and Sun et al. [45,46] pointed out that the ductility of aged asphalt is increased and its viscosity decreased by the addition of waste cooking oil, which is closely related to the type and dosage of the waste cooking oil. Ma et al. [47] found that asphalt with waste bio-oil has better low-temperature performance, and its high-temperature performance is improved by the addition of Iran rock asphalt. Bilema et al. [48] pointed out that the workability of reclaimed asphalt is improved by the addition of waste frying oil and crumb rubber. Ren et al. [49] found that the addition of waste cooking oil can remarkably restore the viscosity and strengthen the low-temperature performance of rejuvenated asphalt; however, other properties of the rejuvenated asphalt are damaged, so waste cooking oil and styrene butadiene rubber were used together to recover the performance of aged asphalt. One study points out that the viscosity of aged asphalt with waste cooking oil is restored more effectively, and the viscosity of aged asphalt with a reasonable dose of waste cooking oil can be reduced by 60.3% and 52.5% [50]. Zhao et al. [51,52] pointed out that the modification effect of waste cooking oil on low-temperature performance is related to its components, and the light components of waste cooking oil have only a limited recovery effect on the low-temperature ductility of aged asphalt. Lai et al. [53] found that the addition of waste cooking oil can reduce the risk of the material cracking under extremely cold conditions, but the ductility of the rejuvenated asphalt binder first increases and then decreases with increasing amount of waste cooking oil. When waste edible vegetable oil is added respectively to SBS-modified asphalt, AH-70# asphalt and AH-50# asphalt, there is a decrease in the viscosity of the various aged asphalts (aged SBS-modified asphalt, aged AH-50# asphalt and aged AH-70# asphalt) and an increase in their ductility [4,54]. Zhang et al. [55] pointed out that the performance of rejuvenated asphalt is markedly affected by the quality of the waste cooking oil; in particular, the aging performance and crack resistance of rejuvenated asphalt are closely related to the acid value and viscosity of the waste cooking oil: the lower the acid value and viscosity of the waste cooking oil, the better the regeneration effect on aged asphalt. Another study found that with increasing waste oil content, the viscosity of aged asphalt gradually decreases; waste cooking oil has the most obvious viscosity-reducing effect, the viscosity of aged asphalt with the highest dosage of waste cooking oil being 54.4% lower than that of the aged asphalt [56].
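The roughly monotone relationship between rejuvenator dosage and blend viscosity reported above is often exploited to estimate a dosage. A minimal sketch follows, assuming the commonly used log-linear (Arrhenius-type) viscosity blending rule, which is my choice of method here rather than one taken from the cited papers; all viscosities are hypothetical:

```python
import math

def blend_viscosity(visc_aged, visc_oil, oil_fraction):
    """Log-linear blending rule: log(eta_blend) = (1-x)*log(eta_aged) + x*log(eta_oil)."""
    return math.exp((1.0 - oil_fraction) * math.log(visc_aged)
                    + oil_fraction * math.log(visc_oil))

def dosage_for_target(visc_aged, visc_oil, visc_target):
    """Oil mass fraction needed to bring the blend down to a target viscosity."""
    return ((math.log(visc_aged) - math.log(visc_target))
            / (math.log(visc_aged) - math.log(visc_oil)))

# Hypothetical 135 deg C viscosities (Pa.s): aged binder, waste cooking oil, virgin-binder target.
x = dosage_for_target(visc_aged=2.4, visc_oil=0.05, visc_target=1.6)
print(f"estimated dosage ~ {100 * x:.1f}% of binder mass")           # about 10.5%
print(f"check: blend viscosity = {blend_viscosity(2.4, 0.05, x):.2f} Pa.s")  # 1.60
```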
Partial test results are summarized in Table 11. The softening point of aged asphalt with waste soybean oil decreases and the ductility increases, the recovery effect being related to the aging degree of the soybean oil. The high-temperature performance and viscosity of aged asphalt with waste cooking vegetable oils decrease with increasing soybean oil content, while ductility increases; the recovery efficiency is related to the aging time of the asphalt and the dosage of the waste oil. Two representative entries of Table 11 (researcher; base asphalt; waste oil and dosage; change of performance and strength) read:

Tang et al. [65]; Donghai-50# asphalt; waste oil from purification of soybean oil, 5%, 10%, 15% and 20%; penetration increases (range 38~104, 0.1 mm), softening point decreases (range 58.8~45.5 °C), and ductility increases (range 5.8~35 cm).

Ji et al. [66]; 70# asphalt; waste soybean oil, 4%; the fatigue resistance and elastic recovery of the 70# rejuvenated asphalt with waste soybean oil are reduced after ultraviolet aging.

The construction safety, high-temperature stability and anti-aging properties of aged asphalt are reduced by the addition of waste cooking oil. The performance of the rejuvenated asphalt binder is closely related to the type of aged asphalt and waste cooking oil and to the aging degree of the binder and the waste oil. The influence of waste cooking oil on the performance of aged asphalt is summarized in Table 11. The flash point of rejuvenated asphalt with different dosages of waste cooking oil is decreased, which shows that waste cooking oil is harmful to the safety of aged asphalt; however, the flash point of rejuvenated asphalt with 20% waste cooking oil or 3% waste vegetable oil is still higher than 230 °C [62,64]. When waste edible vegetable oil is added to different asphalts after conditioning in a pressurized aging vessel, the increase in penetration for aged AH-70# asphalt, aged SBS-modified asphalt and aged AH-50# asphalt becomes progressively smaller, and the influence of waste edible vegetable oil on the thermo-physical properties of these aged asphalts differs evidently [54]. Zheng [59] found that the regeneration effect on aged asphalt is significantly influenced by the aging degree of the waste soybean oil: when aged soybean oil with a viscosity of less than 1792 mPa·s is added to the aged asphalt, the lost light components of the aged asphalt can be effectively supplemented. Ji et al. [60] studied the influence of different rejuvenators with waste cooking vegetable oils on aged asphalt performance and found that the influence of waste corn oil on the penetration and softening point is lower than that of waste soybean oil, and the high-temperature performance of the asphalt binder recovered with waste corn oil is better than with waste soybean oil. Li et al. [68] found that the regeneration effect of 1~3% waste soybean oil added to 70# asphalt differs with the aging degree of the asphalt. Ji et al. [66] found that the influence of ultraviolet aging on the fatigue resistance and elastic recovery performance of 70#-rejuvenated asphalt is significantly greater than on SBS-rejuvenated asphalt. When three different types of waste fried oil at mass ratios of 4~12% are blended into 70# aged asphalt, the effect of the selected waste oils on the recovered asphalt binder performance differs.
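Penetration and softening point results such as those in the Tang et al. [65] row above are often condensed into a penetration index (PI) to track temperature susceptibility. A minimal sketch follows, using the classical Pfeiffer-van Doormaal formula; the input pairs simply reuse the endpoint values quoted above as example inputs:

```python
import math

def penetration_index(pen_25c, softening_point_c):
    """Pfeiffer-van Doormaal PI from penetration (0.1 mm at 25 deg C) and softening point (deg C)."""
    lp = math.log10(pen_25c)
    return ((1952.0 - 500.0 * lp - 20.0 * softening_point_c)
            / (50.0 * lp - softening_point_c - 120.0))

# Endpoint values from the Tang et al. [65] series, reused purely as example inputs:
for pen, sp in [(38.0, 58.8), (104.0, 45.5)]:
    print(f"pen = {pen:.0f} (0.1 mm), SP = {sp:.1f} deg C -> PI = {penetration_index(pen, sp):.2f}")
```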
For the high-temperature performance of the recovered asphalt binder, the improvement effect of the selected waste oils ranked: waste cooking oil 2 > waste cooking oil 3 > waste cooking oil 1 [69], as listed in Table 12. Matolia et al. [70] pointed out that the high-temperature performance and viscosity of different aged asphalts are reduced by the addition of waste vegetable oil, the recovery effect being related to the aging degree of the asphalt and the dosage of the waste vegetable oil; the surface free energy of aged asphalt with waste vegetable oil is improved. When 2~10% waste vegetable cooking oil is added to asphalt after short-term and long-term aging, the change in penetration of the rejuvenated asphalt after short-term aging is larger than that after long-term aging, but the softening point of the rejuvenated asphalt after long-term aging changes more [40]. Another study points out that waste pig fat has good compatibility with aged asphalt, and the durability of aged asphalt is increased by its addition [71]. Yan et al. [72,73] found that the adhesion between asphalt and aggregate is decreased by the aging process, but the surface free energy of aged asphalt is improved by the addition of waste cooking oil; the adhesion work of asphalt with the selected aggregates differs, as listed in Table 13. The dosage of waste cooking oil in rejuvenated asphalt does not follow the principle of "the more the better". The high-temperature performance and shear rheological properties of rejuvenated asphalt with an appropriate dosage of waste cooking oil can equal those of the base asphalt, but there is no consensus on whether the viscosity and low-temperature performance of such rejuvenated asphalt can be recovered to the base asphalt level. The optimum dosage of waste cooking oil in aged asphalt differs depending on whether it is determined from conventional performance indexes or rheological property indexes, as listed in Tables 14-16; this may be related to the test method, the grade and aging degree of the asphalt binder, and the type, quality and viscosity of the waste cooking oil. Man [74] found that the regeneration effect of a waste vegetable oil regenerator on aged asphalt is slightly inferior to that of a traditional rejuvenating agent, but the optimum dosage of the waste vegetable oil regenerator needed to recover the aged asphalt performance is significantly lower.

Table 15. Optimum content of waste oil based on determining different physical properties [70].

The low-temperature crack resistance of the aged asphalt mixture improves with increasing waste cooking oil content: the maximum bending strain of the aged asphalt mixture with waste cooking oil contents of 0, 4%, 8% and 12% is 2633.32 µε, 2879.25 µε, 3087.44 µε and 3246.13 µε respectively [53]. Yan et al. [76] found that after the addition of waste cooking oil to an aged asphalt mixture, the indirect tensile strength increases but the failure strain decreases.

Determining Optimum Content of Waste Oil

With increasing dosage of waste cooking oil, the void ratio, Marshall stability, flow value, indirect tensile strength and fatigue life of a rejuvenated asphalt mixture with waste cooking oil are reduced. The permanent deformation resistance, freeze-thaw splitting strength ratio and residual stability of the mixture first increase and then decrease, and its moisture susceptibility is evidently improved [63,74,77,78].
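The maximum bending strain values quoted above come from low-temperature beam bending tests. As a hedged sketch of the usual small-deflection conversion from mid-span deflection to flexural strain (the standard three-point-bending relation eps = 6*h*d/L^2; the specimen geometry and deflection below are hypothetical):

```python
def max_bending_strain(height_mm, deflection_mm, span_mm):
    """Flexural strain at the beam bottom fiber in three-point bending: eps = 6*h*d / L^2."""
    return 6.0 * height_mm * deflection_mm / span_mm ** 2

# Hypothetical beam geometry (a 30 x 35 x 250 mm trabecular specimen with a 200 mm span)
# and mid-span deflection at failure:
eps = max_bending_strain(height_mm=35.0, deflection_mm=0.55, span_mm=200.0)
print(f"maximum bending strain = {eps * 1e6:.0f} microstrain")  # about 2888, within the range above
```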
Sun et al. [67] found that the Marshall stability of a waste vegetable oil rejuvenated asphalt mixture is slightly lower than that of the base asphalt mixture, but its flow value is higher by about 35%. Yan et al. [76] found that the temperature stability, moisture susceptibility and anti-aging performance of a rejuvenated asphalt mixture with tung oil are better than those of one with waste cooking oil, whereas the fatigue performance of the mixture with waste cooking oil is better. The tensile strength ratio and Marshall stability of an asphalt mixture decrease significantly after accelerated aging, and its moisture susceptibility cannot meet specification requirements; the moisture susceptibility of an aged asphalt mixture is improved by the addition of 12% waste cooking oil, but the Marshall stability still cannot meet specification requirements [78]. Ziari et al. [79] pointed out that the rutting resistance of a mixture rejuvenated with waste cooking oil or waste engine oil is decreased, which is attributed to the lubricating effect of these oils. To offset this negative effect, 25% crumb rubber was blended with the waste cooking oil and waste engine oil to produce modified waste cooking oil and modified waste engine oil; the viscosity of the modified oils is increased, so the rutting resistance of the rejuvenated mixture with modified waste oil is improved.

When the performance of a rejuvenated asphalt mixture with waste oil meets specification requirements, the required dosage of waste cooking oil is generally lower than that of waste engine oil. The density, Marshall stability, flow value, indirect tensile strength and freeze-thaw splitting strength ratio of a rejuvenated asphalt mixture with waste cooking oil are better than those with waste engine oil [63,78]; however, at the same dosage, the permanent deformation of the waste engine oil rejuvenated mixture is lower [78]. Compared with a mixture rejuvenated by a commercial rejuvenating agent, the performance of a waste cooking oil rejuvenated asphalt mixture is not necessarily better, as listed in Table 16.

Table 16 (excerpt). Performance or strength of different rejuvenated asphalt mixtures:
- Man [80]: high-temperature performance and moisture susceptibility, waste cooking oil > RPO rejuvenated mixture; low-temperature performance, waste cooking oil < RPO rejuvenated mixture.
- Ziari et al. [81]: rutting resistance, waste cooking oil > cyclogen and Rapiol rejuvenated mixtures; low-temperature performance and moisture susceptibility, closely comparable.
- Mamun et al. [82]: indirect tensile strength and resilient modulus, waste cooking oil > SAE-10 rejuvenated mixture; loss rate of indirect tensile strength, little difference.

The performance of rejuvenated aged asphalt is closely related to the physical properties of the waste cooking oil.
To achieve good performance of the rejuvenated asphalt binder and mixture, the waste cooking oil used as a rejuvenating agent should satisfy the following technical requirements [4,54,62,64,65] (an illustrative screening sketch based on these criteria is given at the end of this subsection): (1) the viscosity of the waste cooking oil is no more than 1.8 Pa·s at 25 °C, so that it can be finely dispersed in the aged asphalt; (2) the density of the waste cooking oil is 0.90~1.0 g/cm³ at 15 °C, which gives better compatibility between the aged asphalt and the waste cooking oil; (3) good aging resistance, heat stability and weather resistance; and (4) cleanliness: any impurities should be removed.

Analysis on Function Mechanism between Waste Cooking Oil and Aged Asphalt

There is no chemical reaction between waste cooking oil and aged asphalt. The highly polar sulfoxide groups of aged asphalt are diluted by the addition of waste cooking oil, and the contents of both macromolecules and small molecules in the aged asphalt are reduced, which helps to improve its molecular weight distribution [55,58,61,64,65,68]. Meanwhile, the four components of aged asphalt are also changed by a rejuvenating agent based on waste cooking oil, but its colloidal structure is not changed. Zhang and Matolia et al. [55,70] found that the asphaltene and resin contents of rejuvenated asphalt are reduced compared with those of the aged asphalt, while the saturate and aromatic fractions are increased. However, other researchers have reported that the saturate and resin contents of regenerated asphalt increase while the asphaltene and aromatic contents decline [54,68], as listed in Table 17. The structure of regenerated asphalt is the same as that of the aged asphalt: their main chemical shifts are basically similar, and only the electrochemical character of the regenerated asphalt changes. The chemical shifts of aged asphalt and regenerated asphalt lie in the ranges −56~22 ppm and 10~60 ppm, respectively, and the light components are increased by the waste cooking oil [67]. Infrared spectroscopy shows that the functional groups of aged asphalt are highly similar to those of asphalt rejuvenated with waste cooking oil, but the band intensities near 1600 cm−1 and 1030 cm−1 differ: they are markedly decreased in the rejuvenated asphalt [58,64,68], as listed in Tables 18 and 19. When waste cooking oil is added to aged asphalt, the ratio of asphaltenes to maltenes can be reduced but cannot be fully restored. In scanning electron microscopy (SEM) images of aged asphalt and waste cooking oil rejuvenated asphalt, researchers found that the pores of different sizes spread over the surface of the aged asphalt disappeared and that the surface morphology of the rejuvenated asphalt became relatively smooth [63], as shown in Figure 3. However, Li et al. [68] found no remarkable difference between the micro-morphologies of aged and rejuvenated asphalt, although some tiny white points appeared on the rejuvenated asphalt surface; the microscopic appearance of the rejuvenated asphalt is apparently no longer homogeneous, as shown in Figure 4.
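Returning to the technical requirements listed at the start of this subsection, a minimal screening sketch is shown below. The function name, data structure and pass/fail logic are hypothetical illustrations of criteria (1), (2) and (4); requirement (3) is qualitative and is omitted. None of this comes from the cited studies:

```python
def screen_wco(viscosity_25c_pa_s: float,
               density_15c_g_cm3: float,
               impurities_removed: bool) -> list:
    """Check a waste cooking oil (WCO) sample against the technical
    requirements summarized above; returns a list of failed criteria."""
    failures = []
    if viscosity_25c_pa_s > 1.8:                 # requirement (1)
        failures.append("viscosity > 1.8 Pa·s at 25 °C")
    if not (0.90 <= density_15c_g_cm3 <= 1.0):   # requirement (2)
        failures.append("density outside 0.90~1.0 g/cm³ at 15 °C")
    if not impurities_removed:                   # requirement (4)
        failures.append("impurities not removed")
    return failures

# Example: a sample with viscosity 1.2 Pa·s and density 0.92 g/cm³ passes.
print(screen_wco(1.2, 0.92, True))   # -> []
print(screen_wco(2.1, 0.88, True))   # -> two failed criteria
```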
Compared with aged asphalt, the component composition of rejuvenated asphalt is changed, and the property changes of the asphalt are related to this composition. The resin and saturate contents of rejuvenated asphalt are increased, so the ductility of the asphalt is improved and its viscosity is decreased. The softening point, viscosity and ductility of aged asphalt are recovered by the addition of waste cooking oil, which shows that the frictional resistance and low-temperature performance of the rejuvenated asphalt are improved.

Discussion

In this paper, waste cooking oil is considered as a modifier and rejuvenating agent for asphalt binder. The influence of waste cooking oil on the performance of asphalt and asphalt mixtures is discussed, and the factors influencing asphalt with waste cooking oil are analyzed. The performance of asphalt with waste cooking oil is related to the asphalt type and to the content and physical properties of the waste cooking oil. The low-temperature crack resistance, fatigue resistance, workability and self-healing efficiency of asphalt with waste cooking oil are improved, but its high-temperature performance is reduced; the performance changes of asphalt mixtures with waste cooking oil are similar to those of the binder. Methods for improving the performance of asphalt and asphalt mixtures with waste cooking oil are suggested as areas for further study, which could widen the applicable range of asphalt with waste cooking oil. The regeneration efficiency of waste cooking oil used in aged asphalt is good, and the performance of rejuvenated asphalt with waste cooking oil is nearly comparable to that obtained with a petroleum-based rejuvenator. The consumption of petroleum-based rejuvenator is thereby reduced, which accords with sustainable development; collecting and using more waste cooking oil to recover the performance of aged asphalt is therefore encouraged.
Considering the influence of waste cooking oil on the performance of rejuvenated asphalt and asphalt mixtures, the waste cooking oil should satisfy technical requirements on viscosity, density and cleanliness. A method of grading the quality of waste cooking oil is proposed: after collection, the waste cooking oil is distilled to remove moisture; the physical indicators of the treated oil are tested, and it is blended with virgin or aged asphalt following the same preparation process. Based on the basic performance and compatibility of the resulting asphalt, such as low-temperature performance, viscosity, adhesion and high-temperature performance, the waste cooking oil can then be classified into different quality levels.

Further Research Interests

The feasibility of applying WCO in asphalt and asphalt mixtures was analyzed above, and the influence of WCO on their performance was discussed, which helps to reveal the mechanism between WCO and asphalt binder. The following aspects still need further research. The type and quality of WCO are not addressed in the studies above: soybean oil, peanut oil and rapeseed oil are commonly consumed, but the type and aging degree of the WCO derived from these oils are rarely discussed explicitly. This may be an important reason for the performance differences among rejuvenated asphalts and WCO-binder blends, and without it the standardized application of WCO in asphalt and asphalt mixtures cannot be realized. Therefore, the influence of the type and quality of WCO on asphalt and asphalt mixture performance should be studied, the performance differences among selected WCOs analyzed, and quantitative quality-control parameters for WCO proposed. In addition, research on asphalt and asphalt mixtures with WCO is still at the laboratory testing stage, and such materials have not yet been applied in road projects. The reported influence of WCO on asphalt and asphalt mixture performance is not yet consistent, and WCO-modified or WCO-rejuvenated binders may not perform adequately under the coupled effects of the natural environment and vehicle loads. Composite technologies for improving the performance of WCO-modified or WCO-rejuvenated binders should therefore be developed, which would improve the overall performance of these asphalts, and the economics of the different technologies should also be considered. Ultimately, the resourceful utilization of WCO in asphalt pavement can be realized.

Conclusions

The low-temperature crack resistance and construction workability of asphalt modified with waste cooking oil are improved. However, its stiffness, elastic recovery, temperature sensitivity, rutting resistance and fatigue cracking resistance are reduced. The type of base asphalt and the dosage and quality of the waste cooking oil have a significant influence on the modified asphalt performance; the dosage of waste cooking oil added to petroleum asphalt is generally below 10%. The compatibility between base asphalt and different polymers is improved by the addition of waste cooking oil, the storage stability of polymer-modified asphalt with waste cooking oil is enhanced, and the light components of the polymer-modified asphalt are increased.
The construction temperature, optimum asphalt content, indirect tensile strength, dynamic modulus, rutting resistance and fatigue resistance of a rejuvenated asphalt mixture with waste cooking oil are reduced, but its low-temperature crack resistance is improved. The construction temperature of a waste cooking oil rejuvenated asphalt mixture is reduced by up to 15 °C, and its moisture susceptibility is only slightly influenced by the waste cooking oil. The performance of the rejuvenated mixture is closely related to the type and dosage of the waste cooking oil, the type of aged asphalt, and the aging degrees of both the asphalt and the waste cooking oil. However, more waste cooking oil in aged asphalt is not always better: the optimum content of waste cooking oil is 1.0~15.0%. With an appropriate dosage, the high-temperature performance and shear rheological properties of the aged asphalt are recovered. When waste cooking oil is blended with base asphalt or aged asphalt, no chemical reaction occurs between them; however, the absorption peak intensities of individual functional groups in the modified asphalt differ. Meanwhile, the structure of the rejuvenated asphalt with waste cooking oil is the same as that of the aged asphalt, the asphaltene components of the selected asphalts are changed, and the saturate content is increased.
Protocol for base resolution mapping of ac4C using RedaC:T-seq

Summary

N4-acetylcytidine (ac4C) is an mRNA modification catalyzed by the enzyme N-acetyltransferase 10 (NAT10), with position-dependent effects on mRNA translation. This protocol details a procedure to map ac4C at base resolution using NaBH4-induced reduction of ac4C and conversion to thymidine followed by sequencing (RedaC:T-seq). Total RNA is ribodepleted and then treated with NaBH4 to reduce ac4C to tetrahydro-ac4C, which specifically alters base pairing during cDNA synthesis, allowing the detection of ac4C at positions called as thymidine following Illumina sequencing. For complete details on the use and execution of this protocol, please refer to Arango et al. (2022).1

KEY RESOURCES

Pause point: Ribodepleted RNA can be stored at −80 °C for prolonged periods of time. However, we moved to the next step immediately.

Note: rRNA ribodepletion efficiency can be verified through RT-qPCR or by running 1 µL of sample in a bioanalyzer using the Agilent RNA 6000 kit (Figure 1). Approximately 100 ng of ribodepleted RNA is expected.

Note: NaBH4 is an alkaline solution and induces RNA fragmentation to ~100 nt at 55 °C (Figure 3).

Optional: Reduction of ac4C with NaBH4 may also be performed at 37 °C. However, an extra step of RNA fragmentation needs to be included when using temperatures lower than 55 °C. While not performed in this protocol, in case further RNA fragmentation is needed, we use the NEBNext Magnesium RNA Fragmentation Module following the manufacturer's instructions.

Note: To avoid further freezing and thawing of the NaBH4-treated RNA, we recommend moving directly to the next step.

Library preparation

Timing: 4 h

Note: This step performs cDNA synthesis from NaBH4-treated RNA, followed by adapter ligation and PCR amplification. We used the NEBNext® Ultra II™ Directional RNA Library Prep Kit for library preparation.

Note: After ribodepletion, NaBH4 treatment and all isolation steps, we obtained ~10-40 ng of RNA material. NEB recommends using 1 ng-100 ng of ribodepleted RNA for library preparation. We used 10 ng of RNA (Table 1).

a. To each sample of 5 µL NaBH4-treated RNA, add 1 µL of 50 µM NEBNext random primers (provided with the kit).
b. Incubate the sample at 65 °C for 5 min, with a heated lid set at 105 °C. i. Hold at 4 °C.
c. To each sample (6 µL), add the following components and mix by gentle pipetting:
d. Incubate samples (20 µL total volume) in a preheated thermal cycler (with the heated lid set at 105 °C) as follows:

Note: The temperature is elevated compared with that recommended by NEB. We have observed that elevating the reverse transcription temperature increases C>T conversion upon NaBH4 treatment. See the "troubleshooting" section for additional comments.

Alternatives: Several reverse transcriptases can be used, including TGIRT, Superscript III, and AMV.2 An adapter ligation to the 3′ end of RNAs can be performed, followed by reverse transcription using a cDNA primer specific to the adapter. However, we have observed that adapter ligation is inefficient on NaBH4-treated RNA, resulting in very poor library yield.

9. Second strand cDNA synthesis. a. Add the following reagents to the first strand synthesis reactions:

Quality control step: Checking the efficiency of C>T conversion by PCR (Figure 4)

Timing: 2 days

Note: This step is required to estimate the efficiency of C>T conversion at a conserved ac4C site in 18S rRNA.
A pair of primers surrounding position 1842 in 18S rRNA is used to amplify a region that contains an ac4C site at 100% stoichiometry. The residual amount of rRNA in the libraries is enough to perform this quality control. Following PCR, amplicons are analyzed by Sanger sequencing. We typically observed ~50% C>T efficiency at position 1842 (Figure 5). While the PCR and amplicon purification take ~2 h, sending the samples for Sanger sequencing and analyzing the data can take up to two days.

18. To 0.5 µL of libraries, add the following components and mix by gentle pipetting (reagent, amount): NEBNext Ultra II Q5 Master Mix, 12.5 µL.
iii. Repeat for a total of 2 washing steps.
e. Spin the tubes and put them back into the magnetic rack.
f. Completely remove the residual ethanol and air dry the beads for 5 min.
g. Elute the DNA target from the beads with 12 µL of 0.1× TE buffer.
h. Incubate for 2 min at 22 °C-25 °C.
i. Without disturbing the bead pellet, transfer 11 µL of the supernatant to a clean PCR tube and store at −20 °C.
j. Send samples for Sanger sequencing using the 18S rRNA helix 45 Primer F (10 µM): 5′ CGCTACTACCGATTGGATGG 3′.
k. Align and verify C>T conversion at the ac4C site at position 1842 (Figure 5).

Library sequencing

Timing: 2-6 days. Instrument run time (HiSeq 2500): 40 h (rapid run mode) to 6 days (high output mode).

Note: Illumina's two-channel instruments have different baseline error profiles than their four-channel instruments. This protocol describes data produced with four-channel chemistry. It is critical that data that are compared are produced on the same instrument, to ensure that mismatch differences detected are not due to differences in chemistry.

26. Sort and index the alignments; merge alignments as desired.

Timing: 2.5 days

Note: This step will produce a pileup, or summary of coverage and base calls by position, across samples. Conversion to interpretable read counts is performed by the mpileup2readcounts script, which enforces additional quality criteria on alignments. Before beginning, install this script by following the instructions at https://github.com/IARCbioinfo/mpileup2readcounts. Place the executable in your working directory or in a directory in your $PATH.

27. Run the mpileup command and pipe to the mpileup2readcounts script.
a. This example runs this on three samples: wildtype (WT) NaBH4 treated, KO (NAT10−/−) NaBH4 treated, and WT untreated.
Optional: To reduce downstream compute time, you may restrict output to positions with a minimum depth, with an additional pipe; this enforces a depth of 10 in each sample.

28. Parse the output to produce tidier results for comparing mismatch rates.
a. This parsing script is available at Github: https://github.com/dsturg/RedaCT-Seq.
b. Usage: redact_parse_script.pl [starting file] [number of samples].

Quantification and statistical analysis

Timing: 2 h

Note: Following the generation of base-calling summaries via pileup, the next step is to load and process these data in the R environment. An example workflow with sample data is provided at Github: https://github.com/dsturg/RedaCT-Seq. The timing estimate above reflects computational run time along with consideration of diagnostic plots within the workflow. The workflow consists of 4 major steps, described below:

1. Calculation of mismatch rates at each queried position, for each sample. For each mismatch relative to the reference genome, the mismatch rate is calculated as the number of base calls supporting the mismatch divided by the total number of base calls at that position (a Python sketch of this calculation, and of the per-site test used in step 4, follows).
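The workflow itself runs in R, and the exact commands live in the linked repository; purely as an illustration, here is a minimal Python sketch of the per-site mismatch rate (step 1) together with the 2 × 2 Fisher's exact test applied later in step 4. The variable names and example counts are hypothetical:

```python
from scipy.stats import fisher_exact

def mismatch_rate(mismatch_reads: int, ref_reads: int) -> float:
    """Step 1: fraction of base calls at a position supporting the mismatch."""
    total = mismatch_reads + ref_reads
    return mismatch_reads / total if total else 0.0

# Step 4: per-site 2x2 test between NaBH4-treated WT and NAT10-/- samples.
# Rows: WT vs NAT10-/-; columns: mismatched vs reference base counts.
wt_mm, wt_ref = 25, 75   # hypothetical counts at one site (WT)
ko_mm, ko_ref = 2, 98    # hypothetical counts at the same site (KO)

odds_ratio, p_value = fisher_exact([[wt_mm, wt_ref], [ko_mm, ko_ref]])
print(f"WT rate = {mismatch_rate(wt_mm, wt_ref):.2%}, "
      f"KO rate = {mismatch_rate(ko_mm, ko_ref):.2%}, p = {p_value:.2e}")
# Sites are then thresholded on both the fold change of the mismatch rate
# and the p-value, mirroring the selection criteria described in step 4.
```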
2. Projection of genomic coordinates into transcript coordinates. Candidate converted sites are projected onto reference transcripts using functions in the Genomation, GenomicFeatures, and Rtracklayer packages.[7][8][9]

3. QC and determination of candidate modified sites. a. Before statistical testing, screening of sites is performed to ensure specificity of transcript assignment, absence of polymorphism, and a mismatch rate above sequencing error: i. Mismatch rate in untreated control < 1%. ii. Mapping to a single reference transcript. iii. Absence of multiple mismatch types at the same position. iv. Mismatch rate elevated relative to the untreated sample.

4. Statistical testing, thresholding, and exploratory plots.

Note: Statistical testing is performed on mismatch and reference base calls between NaBH4-treated WT and NAT10−/− samples. To perform this test, 2 × 2 matrices are constructed for each relevant site, using the data: mismatched base counts (WT), reference base counts (WT), mismatched base counts (NAT10−/−), reference base counts (NAT10−/−). Fisher's exact tests are performed on these matrices in R (an equivalent call is sketched in Python above). Final selection of sites uses criteria on the magnitude of the difference in mismatch rate (as measured by fold change), in addition to the p-value.

EXPECTED OUTCOMES

Following the procedure described above, where non-C>T mismatches are included in the analysis as quality control, we expect C>T mismatches to be most highly represented, and mismatch rates to be elevated in the WT sample (as in Figure 6A). Mismatch rates at individual mRNA sites will cover a range of values reflecting differences in stoichiometry (Figure 6B), with a maximum that reflects the conversion efficiency of the experiment. This can be assessed by observing mismatch rates in a positive control. In the HeLa transcriptome, mismatch rates at acetylated sites covered a broad range but generally plateaued at 25%. This maximum reflects the conversion rate we observed at the 100% acetylated 18S rRNA site at position 1842. An example acetylated site is shown in Figure 6B. The total number of acetylated locations is dependent on the sample, conversion efficiency, and sequencing depth. With the depth and conditions we describe here, we detected 7,851 acetylated locations.1 The total ratio of ac4C to C in the transcriptome can be estimated by comparing the total C>T mismatches to the sequencing depth at reference cytidines, after applying a minimum depth threshold (for example, 10× coverage). In the HeLa transcriptome, we used this approach to estimate total ac4C:C at 0.016%.1

LIMITATIONS

NaBH4 can react with other nucleobases, including 7-methylguanosine, dihydrouridine, 3-methylcytidine, and wybutosine,10-12 potentially producing mismatches unrelated to ac4C. To accurately call ac4C sites, a NAT10−/− sample must be used. Using the analysis routine described above, RedaC:T-seq analysis filters out non-specific mismatches and detects only NAT10-mediated sites. One limitation thus relates to obtaining NAT10−/− samples, especially when working with primary cells or tissues. In such cases, chemical deacetylation of RNA in mild alkaline conditions may be used.13 While not included in this study, we recommend spiking in samples with acetylated RNA probes containing ac4C at known positions and stoichiometries. Probes will aid in the absolute quantitation of ac4C and help control for reduction variability across different samples. We also recommend using unique molecular identifiers (UMIs) to filter duplicated reads and reduce artifacts related to sequencing errors.
With the non-targeted approach that we describe, we avoid potential selection bias arising from the targeting/enrichment technique. However, this creates a limitation with regard to depth requirements. In an RNA pool with heterogeneous representation, acetylated sites with low stoichiometry on lowly expressed transcripts will be under-detected. For this reason, efforts should be directed toward maximizing sequencing depth to achieve the best detection. In our HeLa whole-transcriptome experiment, we obtained greater than 200 million reads (100 million mate pairs) per replicate, for greater than 400 million reads (200 million mate pairs) per sample type. Additionally, high depth in a control untreated sample is important for evaluating the relevance of low mismatch rates. Successful nucleotide conversion and completed reverse transcription are critical for the success of our approach. Adoption of this protocol for another modification or condition that induces RT stops would fail to identify modified locations. We found no evidence of induction of RT stops at ac4C locations in our data, via searching for "coverage cliffs" or biases in read offset positions.

TROUBLESHOOTING

Problem 1: Low C>T conversion rate in positive control (related to NaBH4).
Potential solution: Make sure to use newly prepared NaBH4.

Problem 2: Low C>T conversion rate in positive control (related to reverse transcription). Cause: improper reverse transcription conditions.
Potential solution: We have observed that elevating the reverse transcription temperature increases C>T conversion upon NaBH4 treatment. Thus, we recommend optimizing the reverse transcription temperature. This is particularly important when using a new reverse transcriptase. In addition, decreasing the concentration of GTP in the reaction increases the efficiency of C>T conversion.2

Problem 3: Low library yield. Cause: low starting RNA material, RNA degradation during ribodepletion, or excessive RNA fragmentation during NaBH4 treatment.
Potential solution: Check RNA integrity and concentration before you begin, after ribodepletion, and after NaBH4 treatment. If excessive loss of RNA is observed at any step, use more starting material. NEB recommends using 1 ng-100 ng of ribodepleted RNA for library preparation. We used 10 ng of RNA.

Problem 4. Potential solution: Optimize the concentration of adapter. NEB recommends dilutions as low as 100× for low-input procedures. If the DNA library yield is high but shows adapter contamination, perform another purification round using 0.9× AMPure beads.

Problem 5: Insufficient memory to process data. Cause: Analyzing a whole transcriptomic dataset, including mismatch types that are not the expected nucleotides of interest, is valuable for troubleshooting; however, this may involve more data than can be processed in R, leading to memory errors or inability to execute code.
Potential solution: Data can be pre-filtered before loading into R, such as by minimum depth, mismatch type, or mismatch frequency. Moderate filtering will enable you to evaluate the data on this reduced dataset. The vector memory limit in R may also be increased, for example with the R_MAX_VSIZE environment variable.

Problem 6: Lack of enrichment of C>T mismatches compared to G>A. Cause: misassignment of the strand for the mismatch.
Potential solution: Ensure that the transcript assignment section in the code workflow has been run. An effective way to diagnose potential alignment or strand issues is to visualize the alignments in the IGV genome browser.
In the browser, reads can be color-coded by strand, and mismatch rates presented in barplots (as in Figure 6B). Note that for mismatches to be highlighted in a barplot, you may need to adjust the allele frequency threshold for them to be visible, as this is commonly set to a high default value of 20%.

RESOURCE AVAILABILITY

Lead contact: Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Shalini Oberdoerffer (shalini.oberdoerffer@nih.gov).

Materials availability: This protocol did not generate new unique reagents.

ACKNOWLEDGMENTS

This protocol utilized the Center for Cancer Research (CCR) Sequencing Facility for the National Cancer Institute (NCI, Frederick, MD) for Illumina sequencing services. This protocol used the Biowulf Linux cluster at the National Institutes of Health, Bethesda, MD (https://hpc.nih.gov) to analyze sequencing data. This work is supported by the Intramural Research Program of NIH, Center for Cancer Research, National Cancer Institute. D.A. was supported by the NCI K99/R00 grant R00CA245035.
The prevalence of predialysis hyperkalemia and associated characteristics among hemodialysis patients: The RE-UTILIZE study

Abstract

Introduction: Hyperkalemia (HK), defined as serum potassium (K+) >5.0 mEq/L, is an independent predictor of mortality in patients on maintenance hemodialysis (HD). This study investigated the annual prevalence of HK and examined patient characteristics potentially associated with a higher annual HK prevalence.

Methods: This retrospective observational cohort study used Dialysis Outcomes and Practice Patterns Study (DOPPS) survey data from US patients undergoing in-center HD thrice weekly from 2018 to 2019. The primary endpoint was the proportion of patients with any predialysis HK (K+ >5.0 mEq/L) within 1 year from the index date (date of DOPPS enrollment), using the first hyperkalemic K+ value. Secondary endpoints were the proportion of patients with moderate-to-severe (K+ >5.5 mEq/L) or severe (K+ >6.0 mEq/L) HK.

Findings: Overall, 9347 patients on HD were included in this analysis (58% male and 49% aged >66 years). Any predialysis HK (K+ >5.0 mEq/L) occurred in 74% of patients within 1 year of the index date, 52% within 3 months, and 38% within 1 month. The annual prevalence of moderate-to-severe and severe HK was 43% and 17%, respectively. Recurrent HK (at least two K+ >5.0 mEq/L within 1 year) occurred in 60% of patients, and 2.8% of patients were prescribed an oral K+ binder. Multivariable logistic regression analysis showed that younger age, female sex, Hispanic ethnicity, and renin–angiotensin–aldosterone system inhibitor use were significantly associated with a higher annual prevalence of any predialysis HK, while Black race, obesity, recent initiation of HD, and dialysate K+ bath concentration ≥3 mEq/L were associated with a lower prevalence of HK.

Discussion: The annual prevalence of predialysis HK and its recurrence were high among US patients on HD, whereas oral K+ binder use was low. Further studies are needed to understand the impact of dialysate K+ bath concentrations on predialysis HK among patients on HD.

INTRODUCTION

Hyperkalemia (HK), generally defined as serum potassium (K+) concentrations of >5.0 mEq/L, is a common complication of chronic kidney disease (CKD), particularly in patients with end-stage renal disease (ESRD) receiving maintenance hemodialysis (HD).1 In patients with CKD, HK is primarily caused by a decline in glomerular filtration rate and therefore reduced excretion of excess K+.2 In patients on HD, the current management of HK includes reduction of dialysate K+ concentrations, increased frequency of dialysis sessions, restriction of dietary K+, and avoidance of medications that increase serum K+ levels.3 Hyperkalemia is potentially life-threatening as it may cause ventricular arrhythmias and cardiac arrest.1 In patients on HD, HK is an independent predictor of mortality.[4][5][6] Predialysis serum K+ concentrations of ≥5.5,4 ≥5.6,5 and ≥5.7 mEq/L1 have been associated with an increased risk of all-cause mortality. In addition, higher predialysis K+ concentrations are associated with greater acute reductions in serum K+ during and immediately after HD,7 which may increase the risk of cardiac arrhythmia.8 The reported prevalence of HK in patients on HD has varied between epidemiologic studies from different regions, including the United States (US) and Europe.2,[9][10][11] Some of the variations may be due to differences in measured durations (e.g., monthly vs.
annual prevalence), threshold laboratory K+ values (e.g., >5.0 vs. >5.5 vs. >6.0 mEq/L), or the number of laboratory K+ values used to define HK (e.g., one vs. two). The Dialysis Outcomes and Practice Patterns Study (DOPPS) is a prospective cohort study investigating practice-related outcomes for patients on HD in >20 countries. Regularly updated, publicly available information on the monthly prevalence of HK is available from the DOPPS Practice Monitor, based on data from approximately 11,000 HD patients in >200 facilities.12 More detailed information beyond the recent analyses is needed to estimate the annual prevalence of predialysis HK and to identify differences in HK prevalence between subgroups of patients on HD. It is unclear whether the estimated prevalence of predialysis HK is impacted by the use of any one laboratory K+ value to define HK (e.g., K+ >5.0 mEq/L) at any time over a 1-year follow-up period, and whether the estimated prevalence of HK differs depending on the day of blood sampling, such as before the first (Monday/Tuesday), second (Wednesday/Thursday), or third (Friday/Saturday) HD session. The aim of this study was to provide an expanded analysis of publicly available information to increase our understanding of the annual prevalence of HK over time and in various patient subgroups. This study also examined patient characteristics associated with a higher annual prevalence of predialysis HK in HD patients.

Study design and objectives

RE-UTILIZE was a retrospective observational cohort study that used DOPPS survey data from US patients who initiated in-center thrice-weekly HD from 2018 to 2019 (Figure 1). DOPPS is an international prospective study of adult patients (aged ≥18 years) treated with in-center HD. In each country, a sample of maintenance HD patients was randomly selected from a nationally representative sample of dialysis facilities. Anonymized data on demographics, laboratory values, dialysis history, K+ binder use, and comorbidities were collected by a facility coordinator at each dialysis center using a standardized chart abstraction procedure. This study utilized a de-identified limited DOPPS data set for the US, pursuant to a data use and licensing agreement between AstraZeneca and Arbor Research Collaborative for Health. The objectives of the study were to: (1) describe the prevalence of HK (defined using the first predialysis K+ >5.0 mEq/L at any time over 1 year) in all patients on HD (primary); (2) identify patient characteristics associated with a higher annual prevalence of HK (secondary); and (3) describe the prevalence of HK using the first predialysis K+ value at the first, second, or third dialysis session; the prevalence of predialysis HK over a 1- or 3-month period; and the recurrence of predialysis HK over a 1-year period (exploratory). The study was considered exempt from Institutional Review Board approval as dictated by Title 45 Code of Federal Regulations, part 46 of the US, specifically 45 CFR 46.101(b)(4). In accordance with the Health Insurance Portability and Accountability Act Privacy Rule, disclosed DOPPS data were considered anonymized per 45 CFR 164.506(d)(2)(ii)(B) through the "Expert Determination" method; no individual patient information was reported.

Study population

Not all patients enrolled in DOPPS were included in the RE-UTILIZE study.
To be included in the study, US-based patients undergoing in-center HD during 2018 through 2019 were required to have: (1) ≥1 year of enrollment in DOPPS; and (2) ≥1 non-missing monthly laboratory K+ value within 1 year of the index date. Therefore, all eligible patients were initially enrolled in DOPPS in 2018 or 2019; data from 2020 represented a partial year, and no patients enrolled in DOPPS in 2020 were included, as full-year 2020 data were not available at the time of data analysis. Patients without laboratory K+ values or who had been enrolled in DOPPS for <1 year were excluded (Figure 2).

Study outcomes

The primary endpoint was the proportion of patients who experienced any predialysis HK (K+ >5.0 mEq/L) at any time over a 1-year period from the index date, using the first monthly laboratory K+ value that met the definition of HK. The predialysis K+ values were obtained after the long or short interdialytic interval. The secondary endpoints were the proportion of patients experiencing moderate-to-severe HK (K+ >5.5 mEq/L) or severe HK (K+ >6.0 mEq/L) at any time over a 1-year period from the index date, using the first laboratory K+ value that met the respective definitions of moderate-to-severe or severe HK. The association between patient characteristics (i.e., age category, sex, race, ethnicity, diabetic ESRD as primary cause of dialysis, year of first dialysis at the DOPPS facility, and comorbidities) and the prevalence of HK was also examined. Exploratory endpoints were the recurrence of predialysis HK over 1 year, where recurrence was defined as at least two laboratory K+ values meeting the HK definition after the first K+ value met the HK definition, and the proportion of patients experiencing HK, moderate-to-severe HK, or severe HK at any time within 1 and 3 months of the index date, and at any time over a 1-year period using the first monthly K+ value that met the definition of HK from the first (Monday/Tuesday), second (Wednesday/Thursday), and third (Friday/Saturday) HD session. Although rare, some patients had blood sampling on Sundays.

Statistical analysis

Descriptive statistics (i.e., counts and percentages) were used to analyze the primary, secondary, and exploratory endpoints; no formal statistical hypotheses were tested for the primary and exploratory endpoints. In these analyses, median (interquartile range [IQR]) and mean ± standard deviation (SD) were used to express the time in days from the index date to the date of the primary or secondary endpoint, and mean ± SD was used to describe the K+ values used to define HK. To minimize bias, the analysis was limited to patients with ≥1 non-missing monthly laboratory K+ value, so that the estimate of HK prevalence did not include patients without K+ values. A sample size of 9347 patients was considered adequate to meet the study needs. As the primary endpoint measure was a proportion, the worst-case scenario for precision would occur when the proportion was 50%. For an evaluable sample size of 9347 patients, the margin of error was 1.0%; no formal power considerations were needed. For the secondary endpoints, hypothesis testing was conducted using logistic regression models to analyze patient characteristics associated with the annual prevalence of HK.
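As a quick check of the precision statement above, the 1.0% margin of error follows from the usual normal-approximation formula for a proportion at the worst case p = 0.5; the sketch below just reproduces that arithmetic:

```python
import math

n = 9347   # evaluable sample size
p = 0.5    # worst case for the variance of a proportion
z = 1.96   # two-sided 95% confidence

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"{margin_of_error:.4f}")   # ~0.0101, i.e. about 1.0%
```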
Adjusted odds ratios (ORs) with 95% confidence intervals (CIs) and p-values were reported for selected patient characteristics, including age category, sex, race, ethnicity, diabetes as primary cause of ESRD, body mass index (BMI), renin-angiotensin-aldosterone system inhibitor (RAASi) prescription, year of first dialysis at the DOPPS facility, dialysis session length, dialysate K+ bath concentration, albumin level, and comorbidities. For the multivariable regression analysis, the first K+ value that met the definition of HK was included as the outcome of interest, and the reference level was the first K+ value that did not meet the definition of HK over 1 year, among patients who never met the definition of HK. The unit of analysis was the distinct combination of patient and facility, and the regression models took facility clustering into account. All statistical analyses were carried out using SAS version 9.4. A p-value of <0.05 was considered to indicate statistical significance.

Patients

Between 2018 and 2019, 19,805 patients in the US undergoing in-center thrice-weekly HD were initially enrolled in DOPPS, and 9347 patients were included in this analysis (Figure 2). Of these patients, 58% were male, 49% were aged >66 years, and 56% first underwent HD at the DOPPS facility before 2018 (Table 1). The most common comorbidities were hypertension (80%), diabetes (64%), and hyperlipidemia (62%); diabetes was the primary cause of ESRD in 41% of patients. A low proportion of patients (2.8%) were prescribed oral K+ binder therapy.

Prevalence and recurrence of predialysis HK

The prevalence of any predialysis HK within 1 year of DOPPS enrollment was 74% when HK was defined as the first monthly K+ >5.0 mEq/L value on any day (Figure 3). Any predialysis HK occurred in 38% of patients within 1 month and 52% within 3 months of the index date (Figure 3). The median (IQR) time from the index date to the first K+ >5.0 mEq/L value was 60 (24-144) days (Table 2). Of the laboratory values used to define any HK (K+ >5.0 mEq/L), the mean ± SD K+ concentration was 5.4 ± 0.36 mEq/L. Moderate-to-severe predialysis HK occurred in 15% of patients within 1 month, 24% within 3 months, and 43% within 1 year of the index date, and severe HK in 4%, 8%, and 17% of patients, respectively (Figure 3). The median (IQR) time from the index date to the first laboratory K+ value >5.5 mEq/L was 109 (44-205) days and to the first laboratory K+ value >6.0 mEq/L was 142 (61-241) days (Table 2). The mean K+ concentration was 5.9 ± 0.33 mEq/L for moderate-to-severe HK and 6.4 ± 0.34 mEq/L for severe HK. Among patients with any predialysis HK (K+ >5.0 mEq/L), the proportion of patients with moderate (K+ >5.5 to ≤6.0 mEq/L) or severe (K+ >6.0 mEq/L) predialysis HK increased over time (Figure S1). Among those with any predialysis HK within 1 month (n = 3531), 27% had moderate HK and 11% had severe HK. Of the patients who had predialysis HK within 3 months (n = 4861), 31% had moderate HK and 15% had severe HK. In the exploratory analysis, recurrent predialysis HK with a second laboratory K+ value >5.0 mEq/L occurred in 60% of patients; 48% and 39% of patients had a third and fourth laboratory K+ value >5.0 mEq/L, respectively (Figure 4). A second laboratory K+ value meeting the definition of moderate-to-severe (K+ >5.5 mEq/L) or severe (K+ >6.0 mEq/L) HK was observed in 25% and 7% of patients, respectively.
The annual prevalence of any predialysis HK, moderate-to-severe HK, and severe HK showed limited variation when using the first laboratory K+ values from the first, second, or third dialysis session of the week (Figure 5). Since blood sampling on Sundays was less common, the predialysis prevalence of HK on Sundays is not shown. The prevalence of any predialysis HK was high in patients with or without RAASi therapy (77.8% vs. 72.5%). Of those patients who were receiving a RAASi (n = 688), 502 patients (73.0%) initiated RAASi therapy after developing HK. Patients receiving an oral K+ binder had a higher prevalence of predialysis HK compared with those not receiving a K+ binder (92.8% vs. 73.4%). Of those receiving a K+ binder (n = 263), the majority (n = 223; 84.8%) initiated K+ binder therapy after developing HK.

DISCUSSION

In this study of patients on in-center HD in the US, the prevalence of predialysis HK was high, with serum K+ concentrations of >5.0, >5.5, and >6.0 mEq/L reported in 74%, 43%, and 17% of patients, respectively, within 1 year; 52%, 24%, and 8% of patients, respectively, within 3 months; and 38%, 15%, and 4% of patients, respectively, within 1 month. The recurrence of predialysis HK was also high: 60% of patients had a second laboratory K+ value of >5.0 mEq/L. In contrast with the high prevalence and recurrence of predialysis HK, a low proportion of patients (<3%) were prescribed oral K+ binder therapy. Moreover, among patients with any predialysis HK (K+ >5.0 mEq/L), the proportion of patients with moderate (K+ >5.5 to ≤6.0 mEq/L) or severe (K+ >6.0 mEq/L) predialysis HK increased over time, occurring in a total of 38% of patients within 1 month, 46% within 3 months, and 58% within 1 year. The data illustrated in Figures 3-5 show that the prevalence of predialysis HK increased over time (i.e., over 1 month, 3 months, and 1 year) and that a significant proportion of HD patients experience recurrent predialysis HK despite long-term dialysis therapy. This suggests that many patients on HD may not be receiving optimal therapy for hyperkalemia and that long-term oral K+ binder treatment may improve K+ homeostasis in this patient population. Although the degree of HK severity is associated with increased mortality risk,1 further studies are needed to determine whether recurrent predialysis HK also increases the risk of mortality in HD patients. Our results also show that the prevalence of predialysis HK remained high irrespective of whether the first laboratory K+ values used to determine HK were collected before the first, second, or third dialysis session of the week. The prevalence of predialysis HK in this study is higher than that reported on the publicly available DOPPS Practice Monitor website, where the weighted prevalence of serum K+ ≥5.0 mEq/L in August 2020, based on the most recent single monthly predialysis serum K+ value, was 38.8%.12 Previous retrospective database studies of US patients on dialysis have also reported a lower prevalence of HK than our study, with an HK prevalence of 33% (when defined as K+ ≥5.0 mEq/L) among HD patients,11 an annual prevalence among dialysis patients of 43.5% in 2014 (K+ >5.0 mEq/L),2 and prevalences ranging from 50.2 to 52.8% between 2010 and 2014 (K+ ≥5.0 mEq/L).9 Some studies that used higher K+ concentration thresholds reported a much lower prevalence of HK, including a previous international DOPPS analysis in which the prevalence of HK (defined as K+ >6.0 mEq/L) was 6.3% in the US and 20.0% in Europe.10
In addition to the varying K+ concentration thresholds used to define HK, differences in observed HK prevalence between studies may be due to differences in the methods used to determine HK. For example, using one laboratory K+ value (as in this study) may increase the annual prevalence estimate compared with using two laboratory values.2 Our DOPPS study, consistent with this theory, reported a higher annual prevalence of K+ >5.0 mEq/L when using one laboratory K+ value (73.9%) than a previous US retrospective study that used at least two laboratory values (43.5%), although the latter estimate also included patients on peritoneal dialysis.2 The prevalence of predialysis HK in our study was >70% regardless of RAASi use, but the odds of any predialysis HK were significantly higher in patients with concomitant RAASi use than in those not on RAASi therapy, and appeared to be higher among patients on oral K+ binder therapy than in those not on K+ binder therapy. However, these results should be interpreted with caution, as the higher HK prevalence in patients on K+ binder therapy may simply indicate that the medication is often initiated after HK develops, rather than meaning that K+ binder therapy did not work or led to an increased prevalence of HK. In contrast to the low rate of oral K+ binder use observed in our study, a recent French registry study reported a prescribing rate for K+ binders of 37% among patients who initiated dialysis between 2010 and 2013.17 However, a previous DOPPS study showed that use of K+ binders varies widely between countries, with prescription rates of 42% in France, 25% in Sweden, 14% in Belgium, 13% in Italy, and 5% in Canada.18 The low K+ binder prescription rate in Canada is consistent with that observed in our study. In our study population, age ≤80 years, female sex, Hispanic ethnicity, and concomitant RAASi medication were associated with higher odds of predialysis HK, whereas Black race, recent dialysis initiation, obesity, and dialysate K+ bath concentrations ≥3 mEq/L were associated with lower odds of predialysis HK. Multiple factors contribute to determining serum K+ in patients on HD, including dietary K+ intake, certain medications (e.g., RAASis, nonsteroidal anti-inflammatory drugs, K+-sparing diuretics, and digoxin), dialysate K+ bath concentrations, dialysis session length, and the effectiveness of K+ removal.3 One potential explanation for the lower odds of predialysis HK in patients with newly initiated dialysis may be the presence of residual kidney function, which declines over time in patients on maintenance HD.19 Other possible explanations include stringent dietary K+ and/or protein restriction, as well as a decreased prevalence of RAASi prescriptions and increased diuretic prescribing (leading to kaliuresis) in patients with stage 5 CKD.20 Our study also showed that obesity was associated with lower odds of predialysis HK. This finding is consistent with a previous registry study of patients with CKD, which found that low BMI (<18.5 kg/m²) was associated with increased odds of HK (OR, 1.60; 95% CI, 1.23-2.08) and high BMI (>30 kg/m²) with lower odds of HK (OR, 0.77; 95% CI, 0.70-0.85).21 One explanation for the lower risk of HK in patients with obesity may be the increase in hyperaldosteronism among patients with high BMI, which leads to an increase in K+ excretion.22 In our study, a dialysate K+ bath concentration ≥3 mEq/L was associated with lower odds of predialysis HK.
Figure 5. Annual prevalence of predialysis HK based on the first monthly K+ value >5.0, >5.5, and >6.0 mEq/L from any day of the week and from the first, second, and third HD sessions. *Includes patients with Sunday blood sampling. HD, hemodialysis; HK, hyperkalemia; K+, potassium.

Similarly, a previous DOPPS study found that, after adjusting for confounding variables, there was an inverse relationship between dialysate K+ bath concentrations and predialysis serum K+ levels, with a change of −0.25 (95% CI, −0.26 to −0.24) mEq/L in serum K+ per 1-mEq/L increase in dialysate K+ bath concentration.23 However, this effect is almost certainly confounded by indication (i.e., patients with lower predialysis serum K+ are prescribed higher dialysate K+ bath concentrations), as an instrumental variable analysis to account for this bias in the previous study showed a minimal effect of dialysate K+ concentration on serum K+ of +0.09 (95% CI, 0.05-0.14) mEq/L per 1-mEq/L increase in dialysate K+ concentration.23 Our study found that younger age was associated with increased odds of predialysis HK. This finding is consistent with a previous study of the prevalence of HK (K+ ≥5.5 mEq/L) in HD patients, which reported a higher HK prevalence in patients aged 18-44 years versus those aged ≥75 years (18.7 vs. 12.6 events per 100 patient-months).1 Potential explanations for this observation include a more robust protein intake (most protein sources are also a good source of potassium) in younger patients compared with older patients; a recent US Renal Data System (USRDS) report showed that a greater proportion of younger patients had normal serum albumin levels compared with older patients.24 Female sex was also associated with increased odds of predialysis HK compared with male sex; this difference has yet to be explained, other than by possible variations in dietary K+ intake. Our study did not evaluate hyperkalemia hospitalizations; however, USRDS data showed that in patients with stage 5 CKD, the risk of hospitalization for hyperkalemia was greater in patients aged 70-84 years compared with those aged ≥85 years, and, interestingly, the rate of hospitalization for hyperkalemia in patients with stage 5 CKD was 5.8/1000 person-years in females versus 7.0/1000 person-years in males.25 Given these differences in patient factors associated with predialysis HK, further long-term studies with larger populations are needed to confirm the risk factors for predialysis HK in patients on HD. In a previous study of patients on HD, HK was more likely to occur the day after the long interdialytic interval versus the day after the short interdialytic interval.1

Figure 6. Forest plot of patient factors associated with annual prevalence of any predialysis HK (first laboratory K+ >5.0 mEq/L value). BMI, body mass index; CI, confidence interval; DOPPS, Dialysis Outcomes and Practice Patterns Study; ESRD, end-stage renal disease; HK, hyperkalemia; K+, potassium; RAASi, renin-angiotensin-aldosterone system inhibitor.

In the general population, HK prevalence was consistently higher among individuals aged ≥65 years versus <65 years within the same comorbidity subgroup, including patients on dialysis, indicating that age is an independent risk factor for HK.2
A previous US study of patients on maintenance HD showed that baseline serum K+ concentrations were significantly higher in patients of Hispanic versus non-Hispanic ethnicity, and significantly lower in Black versus non-Black patients.26 Compared with White patients, the likelihood of developing HK was higher in those of Hispanic ethnicity (OR, 1.32; 95% CI, 1.25-1.39) and lower in Black patients (OR, 0.58; 95% CI, 0.55-0.62). The reasons for these ethnic/racial differences in serum K+ concentrations and HK prevalence in HD patients are unclear but may be related to differences in diet between groups.26 In previous studies of HD patients, factors associated with lower odds of HK included plasma sodium level and diuretic use.27,28 The DOPPS has advantages over other data sources: it provides a more nationally representative sample of HD patients from dialysis centers than a single large dialysis organization, and it includes more detailed data than most registries and administrative databases.29 However, the use of DOPPS data is associated with some limitations. Due to the retrospective, observational nature of this analysis, confounding variables are possible, and causal associations between HK and patient factors cannot be determined. The study was limited to HD patients with available serum K+ values in the DOPPS database and therefore may not be generalizable to all US patients on HD or to those without available laboratory data. In addition, as laboratory values are not measured centrally, nonstandardized testing may have introduced measurement errors,29 and laboratory testing may be driven by the presence of HK symptoms, thereby leading to an overestimate of HK prevalence (i.e., selection bias). However, as serum K+ measurement is a component of the routine laboratory panel, this bias is likely to be small. Lastly, our study did not evaluate the association between interdialytic interval length and predialysis HK, since the dialysis schedule and interdialytic interval duration of patients were not collected within DOPPS.

CONCLUSIONS

The prevalence and recurrence of predialysis HK (K+ >5.0 mEq/L) were high among US patients on HD, whereas the proportion of patients who were prescribed a K+ binder was low. Even if all K+ binder prescriptions were for patients with predialysis HK, the low rate of K+ binder prescribing suggests patients may be under-treated with oral binders during nondialysis days, although further data from outcomes studies are needed to confirm this; any such treatment pattern is likely to be episodic given the high recurrence rates. A higher annual prevalence of predialysis HK was observed in patients aged ≤80 years, females, patients of Hispanic ethnicity, and those on concomitant RAASi medications. Further studies are needed to understand the impact of additional factors, such as dialysate K+ bath concentrations, on predialysis HK prevalence in HD patients.
The Biomechanical Basis of Biased Epithelial Tube Elongation

During lung development, epithelial branches expand preferentially in the longitudinal direction. This bias in outgrowth has been linked to a bias in cell shape and in the cell division plane. How such bias arises is unknown. Here, we show that biased epithelial outgrowth occurs independently of the surrounding mesenchyme. Biased outgrowth is also not the consequence of a growth factor gradient, as biased outgrowth is obtained in cultures with uniform growth factors, and in the presence of the FGFR inhibitor SU5402. Furthermore, we note that epithelial tubes are largely closed during early lung and kidney development. By simulating the reported fluid flow inside segmented narrow epithelial tubes, we show that the shear stress levels on the apical surface are sufficient to explain the reported bias in cell shape and outgrowth. We use a cell-based vertex model to confirm that apical shear forces, unlike constricting forces, can give rise to both the observed bias in cell shapes and tube elongation. We conclude that shear stress may be a more general driver of biased tube elongation beyond its established role in angiogenesis.

Epithelial tubes are an essential component of many organs. During development, epithelial tubes elongate (Fig. 1A). Tube elongation can be either isotropic or anisotropic, i.e. the tubes either lengthen as much as they widen, or there is a bias in outgrowth (Fig. 1B). Growth is by default isotropic, and a bias in elongation can therefore only arise if growth symmetry is broken in the epithelium. How this symmetry break is achieved is largely elusive. We will focus here on the mouse embryonic lung and kidney. In the mouse lung, epithelial tube expansion is anisotropic initially (E10.5-E11.5), but, at least in the trachea, becomes isotropic at later stages (from E12.5) (Kishimoto et al., 2018; Tang et al., 2011; Tang et al., 2018). The biased outgrowth has been related to a bias in the orientation of the mitotic spindles of dividing cells (Saburi et al., 2008; Tang et al., 2011; Tang et al., 2018; Yates et al., 2010). According to Hertwig's rule (Hertwig, 1884), cells divide through their mass point and perpendicular to their longest axis. Indeed, the bias in cell division is accompanied by a bias in cell shape (Tang et al., 2018). The planar cell polarity (PCP) pathway plays an important role in regulating the mitotic spindle angle distribution in many organs, including the embryonic renal tubes (Ciruna et al., 2006; Gong et al., 2004; Saburi et al., 2008), though no such involvement could be ascertained for the early stages of lung development. Independent of whether the PCP pathway is involved, it remains an open question how the elongation bias and its direction arise in the first place.

In principle, a bias in outgrowth could originate from polarization along the tube, from a pulling force at the tip, or from a mechanical constraint that limits expansion in the circumferential direction. Several signalling pathways are known to affect the bias in lung tube elongation. Thus, hyperactive KRas (KRasG12D) in the lung epithelium abrogates the bias in outgrowth during lung branching morphogenesis, and pharmacological reagents that activate or inhibit fibroblast growth factor (FGF) signalling, sonic hedgehog (SHH) signalling, or L-type Ca2+ channels affect the width of cultured lung buds (Goodwin et al., 2019).
FGF10 and glial cell line-derived neurotrophic factor (GDNF) signalling are necessary for the formation of branches in the lung and kidney, respectively (Michos et al., 2010; Min et al., 1998; Moore et al., 1996; Pichel et al., 1996; Rozen et al., 2009; Sanchez et al., 1996; Sekine et al., 1999). FGF10 has been proposed to act as a chemoattractant because it is secreted from the submesothelial mesenchyme, and isolated lung epithelia grow towards an FGF10 source (Park et al., 1998). However, Gdnf is expressed uniformly in the ureteric cap mesenchyme (Hellmich et al., 1996), and branching morphogenesis is still observed when Fgf10 is expressed uniformly in the lung mesenchyme (Volckaert et al.).

In this paper, we sought to systematically analyse the minimal requirements for biased epithelial tube elongation. To this end, we cultured mouse embryonic lungs and kidneys under different conditions and quantified the length and width of the branches for up to 60 h. We show that the mesenchyme is not necessary, as biased elongating outgrowth is still observed when epithelial buds are cultured on their own, in the absence of mesenchyme, with uniformly dispersed growth factors. Furthermore, we show that while ERK signalling concentrates at the tip of branching isolated epithelial tubes, there is no evidence for the formation of actin-rich protrusions at the epithelial tips that could guide the biased elongating outgrowth. In early lung and kidney development, epithelial tubes only have a narrow luminal space, and tubular cross-sections are often elliptical rather than round. Despite the non-uniform curvature of such closed tubes, tension, as monitored with actin staining, remains uniform in the epithelium. We show that the predicted shear stress level in the narrow embryonic tubes is within the range that cells can, in principle, sense, and a cell-based model confirms that a tangential apical force, as provided by shear stress, can result in the reported bias in cell shape and elongating outgrowth.

Biased epithelial lung tube elongation

Given the reports that the trachea switches from anisotropic to isotropic expansion around E12.5 (Kishimoto et al., 2018), we sought to measure the length and circumference of the bronchus of the left lobe (LL) between E10.5 and E14.5. For this, we used the ShhGC/+; ROSAmT/mG transgenic mouse line, which expresses green fluorescent protein (GFP) in the cell membrane of the lung epithelium (Fig. 1A, C, D). Here, we averaged the circumference over the entire 3D bronchus, except for the parts where side branches form (Fig. S1). We confirm the previously reported 2-fold stronger longitudinal than circumferential expansion between E10.5 and E11.5, and find that, much as in the trachea, there is a switch to isotropic growth at later stages, though a day later (E13.5) than in the trachea. The substantial widening of the bronchus thus occurs after the emergence of cartilage and smooth muscles (Hines et al., 2013; Schittny et al., 2000). Between E11.5 and E13.5, the bronchus still lengthens more than it widens, even though the overall rate of growth declines (Fig. 1E).

Each 3D length measurement in Fig. 1D,E comes from a different embryo, and we notice a certain level of variability between the specimens. Part of the differences can be attributed to differences in developmental progress, which is observed even in embryos from the same litter.
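A convenient way to express the outgrowth bias quantified above is the ratio of relative length gain to relative circumference gain between two stages; a value of 2 corresponds to the reported 2-fold stronger longitudinal expansion. The sketch below uses this simple definition with made-up measurements; the study's exact quantification may differ.

```python
def outgrowth_bias(l0: float, l1: float, c0: float, c1: float) -> float:
    """Ratio of relative elongation to relative widening between two stages."""
    return (l1 / l0 - 1.0) / (c1 / c0 - 1.0)

# Hypothetical example: length grows by 80%, circumference by 40%,
# giving the 2-fold elongation bias described in the text.
print(outgrowth_bias(l0=100.0, l1=180.0, c0=60.0, c1=84.0))  # 2.0
```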
To establish a reliable time line of the growth process, we cultured E11.5 embryonic lungs for 48 h on a filter and measured the lengths and average diameter of the branches (Fig. 1F, Fig. S1). Given the development on a filter, there are differences in the branch angles, and, much as for the 3D specimens, there is considerable variability between lungs. Nonetheless, in all specimens, we observe a similar biased expansion of the left bronchus (Fig. 1D,E, grey) as in the serially isolated embryonic lungs (Fig. 1D,E, green). The cultured lungs elongate slightly less than in the embryo, and there is less of a reduction in the branch width, though this difference may reflect differences in the analysis. The width in the 2D cultures was averaged along the entire branch (Fig. 1F), while the averaged circumference of the 3D specimens excluded the parts where branches emerge (Fig. S1). Overall, the cultured lungs recapitulate the growth process in the embryo very well, and we will therefore use these to analyse the mechanisms that drive elongating outgrowth.

Mesenchyme is not required for biased epithelial tube elongation during lung and kidney development

While smooth muscles have recently been shown to be dispensable for lung branching morphogenesis (Young et al., 2020), the mesenchyme is well known to affect branch shapes (Blanc et al., 2012; Lin et al., 2003; Sakakura et al., 1976). We, therefore, sought to analyse the impact of the mesenchyme on biased epithelial outgrowth. To this end, we cultured both control lungs and kidneys (Fig. 2A, C, Video S1, S2) as well as explants where the epithelium was enzymatically separated from the mesenchyme (Fig. 2E, G, Video S3, S4). We used homogeneously distributed suitable growth factors in each case (Materials and Methods), and analysed the three lateral domain branches in the left bronchus (LL1-LL3), and two branches in the ureteric bud (UB). We find that these five branches all decrease in their average diameter as they elongate (Fig. 2A-D). As a result, the elongation bias is even more pronounced than for the left lung bronchus. Both lung and ureteric buds still show biased elongating outgrowth when cultured in the absence of mesenchyme (Fig. 2F-H). This excludes a possible wall-like restrictive force, a pulling force, or other polarity cues from the mesenchyme as a necessary driver of epithelial tube elongation. It also confirms that smooth muscles are not necessary for biased elongating outgrowth. We note, however, that in the case of the ureteric bud, the branches elongate less and remain wider in the absence of mesenchyme. This shows that the mesenchyme impacts the elongation process, even though it is not necessary for biased elongation.

Biased outgrowth is not the result of FGF signalling at the tip

Given that biased epithelial outgrowth occurs while isolated explants are exposed to uniform growth factor concentrations (Fig. 2), a necessary polarity in the form of an external growth factor or morphogen gradient can be ruled out. However, as branching is still observed with homogeneous distributions of …

To explore the possible mechanical effects that can lead to the observed collapse of epithelial tubes, and whether this could provide cues for biased tube outgrowth, we conducted continuum-mechanical finite element simulations in a two-dimensional cross-section perpendicular to the tube axis (Fig. 5A).
In our numerical model, the tubular epithelial tissue was represented by an isotropic, linearly viscoelastic continuum, neglecting the cellular structure of the tissue. The epithelial material properties were therefore characterized by a Young's modulus E and a Poisson ratio ν. As an initial condition, we chose a tubular shape with uniform radius R (measured from the cylinder axis to the middle of the tissue), and the relative tissue thickness was set to t/R = 0.5. The epithelium was set to be intrinsically uncurved, such that a stress-free configuration would be a flat tissue. We used a custom-built finite element simulation framework (Vetter et al., 2013) (Materials and Methods).

We considered three different collapse scenarios. In the first, a uniform net pressure difference P was applied, corresponding to either a pressure drop in the lumen or an increased pressure exerted onto the epithelium by the external environment. The pressure was increased until the critical point of collapse was surpassed. In the second scenario, the epithelial tube was pinched by two rigid parallel clamps slowly approaching one another, mimicking external spatial constraints imposed by a stiff surrounding medium. In the third scenario, the enclosed lumen volume V was controlled with a Lagrange multiplier and drained over time until the tube was sufficiently collapsed. Figure 5B shows the equilibrated simulation results for each of the three scenarios. In all cases, both the hoop stress and curvature profiles along the tissue midline are highly nonuniform. Hoop stress is localized almost exclusively in the two extremal points with large curvature. We conclude that, given their non-uniform distribution in the tube cross-section, the stress and curvature patterns that arise from the deformation cannot serve as cues for uniform biased outgrowth.

Stresses can be relaxed rapidly in tissues. We tested experimentally whether the most curved parts remain under increased tension. We find that antibody staining for actin, a read-out for tension in a tissue, is uniform in the closed lung tubes, indicating that there is no increased tension in the curved parts (Fig. 5C,D). It remains possible that a wall-like constraint, in combination with rapid stress relaxation, enforces the elliptic shape and elongating outgrowth. However, it is unclear how such an outer wall-like constricting force would arise, even in the absence of mesenchyme. Moreover, we have shown before that a constricting force that results in the observed biased epithelial outgrowth in a cell-based model is insufficient to generate the observed bias in cell shape and cell division (Stopka et al., 2019). Consequently, the mechanical constraints explored here are unlikely to drive the biased elongating outgrowth of embryonic lung tubes.

Shear stress in the developing lung

The developing lung epithelium secretes fluid into the luminal space (George et al., 2015; Nelson et al., …). Cells sense shear stress with their cilium (Weinbaum et al., 2011).
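Before turning to the measured sensitivity thresholds, the magnitude of the apical shear stress can be sanity-checked with the Poiseuille relations for steady laminar flow in a circular tube, τ_wall = 4µQ/(πR³) and ΔP = 8µLQ/(πR⁴). The flow rate and fluid properties below follow the values used in this study; the tube radius and length are illustrative assumptions (the actual computation was performed on segmented, non-circular geometries).

```python
import math

mu = 1.0e-3   # Pa*s, viscosity of water
Q  = 420e-18  # m^3/s, inlet flow rate of 420 um^3/s (George et al., 2015)
R  = 5e-6     # m, ASSUMED effective luminal radius of a narrow embryonic tube
L  = 500e-6   # m, ASSUMED tube length for the pressure-drop estimate

tau_wall = 4 * mu * Q / (math.pi * R**3)      # wall shear stress
delta_p  = 8 * mu * L * Q / (math.pi * R**4)  # pressure drop along the tube

print(f"wall shear stress ~ {tau_wall:.1e} Pa")  # ~4e-3 Pa, above ciliary sensing thresholds
print(f"pressure drop     ~ {delta_p:.1e} Pa")   # ~1 Pa, tiny next to kPa-scale tissue stiffness
```

The strong R dependence (R³ for shear, R⁴ for pressure) is also why, as discussed below, the same flow can only generate sensible shear while the tubes are narrow.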
Epithelial kidney cells are particularly sensitive, with renal collecting duct chief cells responding to apical shear stress as low as 6.8·10^-4 Pa (Resnick and Hopfer, 2007), and cultured kidney epithelial cells responding to 0.075 Pa, but not …

The pressure gradient in the lumen could also, in principle, impact tube width directly through a fluid-structure interaction (FSI). To obtain a flow from the bud tips to the opening of the trachea, the fluid pressure must be highest at the tip and smallest at the tracheal opening (Fig. 6D). In the case of an FSI, the shape of the branches would then depend on the local fluid pressure, and buds should be wider than stalks. While stalks are indeed thinner than buds, there is no direct dependency of branch width on the distance from the tracheal opening (Fig. 6E, Fig. S4, Video S7). A simple way to modulate the pressure at the tips is to alter the distance between the tips and the outlet by culturing lungs either with or without their trachea (Fig. 6E). Removal of the trachea shortens the distance to the outlet and thus, in the case of a constant pressure gradient and flow rate, reduces the pressure difference between the tips and the outlet. We find, however, that removal of the trachea impacts neither branching morphogenesis nor tip shapes (Fig. 6E,F), which rules out a significant mechanical impact of the fluid pressure on the surrounding epithelium.

Shear stress forces can result in the observed bias in cell shape and outgrowth

In a final step, we investigated whether shear stress, which results in a biased force in the longitudinal direction (…

Shear stress does not directly deform cells; rather, cells sense shear stress via their primary cilium and actively respond with cell shape changes (Galbraith et al., 1998; Weinbaum et al., 2011). However, this indirect shape change also corresponds to a force that the cells generate intracellularly. Accordingly, we represent the effect of shear stress by applying a constant force at the top and bottom of the simulated tissue (Fig. 7A). This results in a uniform force field with uniform relative displacement of cells along the tissue axis (Fig. 7E), as would be expected in the case of shear stress. When we apply uniform growth, we find an almost linear increase of the bias in outgrowth with such an external force (Fig. 7E,F). As shear stress is actively sensed and translated by the cells into a change in cell shape (Galbraith et al., 1998), we note, however, that there is not necessarily a linear correspondence between the extracellular shear force and the intracellular force that reshapes the cell. Other force response curves could therefore result from the intracellular regulatory processes that respond to the shear stress.

Between E10.5 and E11.5, the lung tubes elongate twice as much as they widen (Fig. 1E) (Tang et al., 2011). We obtain this 2-fold bias in outgrowth with an elongation force of 1 a.u. (Fig. 7F). It has previously been noted that this bias in outgrowth is accompanied by a bias in cell shape and cell division (Tang et al., 2018). Cell shape and the cell division axis are linked in that cells in the lung epithelium divide perpendicular to their longest axis when their aspect ratios are greater than 1.53 (Tang et al., 2018). With a force of 1 a.u.
and cell division perpendicular to the longest axis, the simulations (…

The elongation of epithelial tubes is a key developmental process. We combined a quantitative analysis of lung and kidney branching morphogenesis with computational modelling to evaluate candidate mechanisms for the biased elongation of epithelial tubes. We show that biased elongation is an inherent property of these epithelial tubes, and that it does not require contact with the mesenchyme or an external chemotactic gradient. We note that the epithelial tubes are largely collapsed in early lung and kidney development, and find that the fluid flow that has previously been estimated for early lung development (George et al., 2015) could result in shear stress levels that epithelial cells can, in principle, sense with their primary cilium (Weinbaum et al., 2011). We evaluate the impact of shear stress in a cell-based tissue model, and find that shear stress, unlike constricting forces (Stopka et al., 2019), can explain both the observed biased tube elongation and the observed bias in cell division. Shear stress may thus be a more general driver of biased tube elongation beyond its established role in angiogenesis (Davies, 2009; Galbraith et al., 1998; Galie et al., 2014).

Consistent with a role for shear stress in biased lung tube elongation, the bias in cell division and … The cilium is necessary to respond to shear stress because shear stress does not directly deform cells; rather, cells sense shear stress via their primary cilium and actively respond with cell shape changes (Galbraith et al., 1998; Weinbaum et al., 2011). Cells then divide perpendicular to their longest axis (Hertwig, 1884), and the bias in cell shape along the lung tube axis therefore translates into a bias in cell division (Tang et al., 2018).

Given that shear stress is actively sensed and translated by the cells into a change in cell shape (…

KRas has previously also been linked to changes in cell shape and motility in airway epithelial cells by affecting cortical actin (Fotiadou et al., 2007; Okudela et al., 2009), and the KRasG12D mutation has been found to upregulate multiple ECM components in the pancreatic stroma (Tape et al., 2016).

While biased tube elongation is observed in isolation, independent of the mesenchyme, we find that the …

Shear stress only has the potential to drive biased elongating outgrowth because the epithelial tubes are so narrow in early lung and kidney development. In later stages, tubes are wide and open. The same level of apical shear stress would then require much higher flow rates, and tube growth indeed becomes isotropic. It remains unclear why the tubes collapse in early developmental stages. Mechanical effects that could, in principle, cause the collapse of the tubes would result in the highest mechanical stress levels in the curved parts. Accordingly, neither curvature nor hoop stress (Hamant et al., 2008) could explain the biased uniform outgrowth of the collapsed tubes. We note that staining for actin, a read-out for tension within tissues, is uniform in the closed tubes, suggesting that any stress that may have been generated during the collapse is quickly relaxed away. Going forward, it will be important to identify the cause of tube collapse and understand its potential impact in biasing elongating outgrowth.

Similarly, measurements of the fluid velocity are required.
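The shape-to-division link invoked in this discussion can be stated operationally: estimate a cell's longest axis, and orient division perpendicular to it only when the aspect ratio exceeds the 1.53 threshold reported by Tang et al. (2018). In the sketch below, the principal-axis estimate from the cell's vertex polygon is our own simplification, not necessarily the implementation used in the vertex model.

```python
import numpy as np

AR_THRESHOLD = 1.53  # aspect ratio above which lung epithelial cells divide
                     # perpendicular to their longest axis (Tang et al., 2018)

def division_axis(vertices: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a unit vector along the division plane for a 2D cell polygon.

    vertices: (n, 2) array of cell vertex coordinates.
    """
    centered = vertices - vertices.mean(axis=0)
    # Principal axes of the vertex cloud approximate the cell's shape axes.
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    aspect_ratio = np.sqrt(eigvals[1] / eigvals[0])
    if aspect_ratio > AR_THRESHOLD:
        # Hertwig's rule: divide perpendicular to the longest axis,
        # i.e., along the shortest principal axis.
        return eigvecs[:, 0]
    # Nearly isotropic cell: division orientation is effectively random.
    theta = rng.uniform(0, np.pi)
    return np.array([np.cos(theta), np.sin(theta)])

rng = np.random.default_rng(1)
elongated = np.array([[0, 0], [4, 0], [4, 1], [0, 1]], float)  # 4:1 rectangle
print(division_axis(elongated, rng))  # ~ +/-[0, 1]: plane perpendicular to the long x axis
```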
In the mouse lung, fluid flow is well visible at …

The Shh-cre allele was used to drive Cre recombinase-mediated recombination of the ROSAmT/mG allele. As recombined EGFP localizes to the cell membrane, and Shh is only expressed in the lung bud epithelium, individual cell morphology could be segmented.

Immunofluorescence, optical clearing and light-sheet imaging

Whole-mount tissue clearing of dissected embryonic explants was performed with the Clear Unobstructed …

In the meantime, LMP hollow agarose cylinders were prepared according to (Udan et al., 2014). Hollow cylinders allow for unencumbered 3D embryonic growth, minimize tissue drift, enable imaging from multiple orientations, and allow for better nutrient and gas perfusion. Within a hollow cylinder, a single specimen was suspended in undiluted Matrigel (VWR International GmbH; 734-1101) to recapitulate the in-vivo microenvironment. All cylinders were kept at 37°C with 5% CO2 in culture media for 1 h prior to mounting.

For an overnight culture, the imaging chamber was prepared first by sonication at 80°C and subsequent washes in ethanol and sterile PBS. After the chamber was assembled, it was filled with culture medium and allowed to equilibrate at 37°C with 5% CO2 for at least 2 h before a cylinder was mounted for imaging. Furthermore, to compensate for evaporation over time and maintain a fresh culture media environment, peristaltic pumps were installed to supply 0.4 ml and extract 0.2 ml of culture medium per hour. Each lung explant was then aligned with the focal plane within the centre of a thin light-sheet to enable fine optical sectioning with optimal lateral resolution. For this study, all samples were imaged using a 20x/1.0 Plan-APO water immersion objective.

Light-sheet datasets were transferred to a remote storage server and processed in a remote workstation (Intel Xeon CPU E5-2650 with 512 GB memory). Deconvolution via the Huygens Professional software (SVI) improved overall contrast and resolution, while Fiji (ImageJ v1.52t) (Schindelin et al., 2012) was used for accentuating cell membranes, enhancing local contrast, and removing background fluorescence. To extract 3D morphological measurements, the length was measured along the centre of Imaris 9.1.2 (Bitplane, South Windsor, CT, USA) iso-surfaces, and cross-sections of tubular bronchial portions were masked and exported into Fiji, where the 2D circumference was calculated and averaged over the tube.

Segmentation and skeletonization of 2D culture datasets

Epifluorescence images of embryonic lung and kidney explants were processed in Fiji (ImageJ v1.52t) (Schindelin et al., 2012). Before segmentation, local image contrast was increased and the image background subtracted. Images were then binarized using a global thresholding method, and boundaries were …
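The 2D segmentation steps just described (local contrast enhancement, background subtraction, global thresholding, skeletonization) map onto standard image-processing operations. The following is a rough scikit-image approximation of such a pipeline, not the authors' actual Fiji workflow; the filter and size parameters are placeholder choices.

```python
import numpy as np
from skimage import exposure, filters, morphology

def segment_explant(image: np.ndarray) -> np.ndarray:
    """Binarize an epifluorescence explant image, then skeletonize it."""
    img = image.astype(float)
    img = exposure.equalize_adapthist(img / img.max())      # local contrast enhancement
    background = filters.gaussian(img, sigma=50)            # coarse background estimate
    img = np.clip(img - background, 0, None)                # background subtraction
    binary = img > filters.threshold_otsu(img)              # global thresholding
    binary = morphology.remove_small_objects(binary, 500)   # drop small debris
    return morphology.skeletonize(binary)                   # centerline for length measures
```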
Continuum-mechanical simulations of epithelial tube collapse

A full technical description of the custom finite element simulation framework that we employed to simulate epithelial tube collapse can be found in (Vetter et al., 2013). In brief, the shape of the epithelium …

… its mass density was set to that of water (1000 kg/m³). A no-slip condition was assumed at the interface between the fluid and the surface of the epithelium. To generate fluid motion, the flow rate at the inlet was set to 420 µm³/s (George et al., 2015), and the pressure at the outlet was maintained at 1 atm. The average wall shear stress value on the apical surface was measured with a boundary probe.

Cell-based simulations of shear stress effect

We simulated the growth of the lung epithelium subjected to shear stress using a vertex model available in the Chaste framework (Fletcher et al., 2013; Mirams et al., 2013). The dynamics of the vertices were derived from the potential energy of the system, as previously proposed (Nagai and Honda, 2009).

DI conceived the study. HG obtained the light-sheet microscopy data in Figures 1 and 4, and, together with OM, the pERK staining in Figure 3B,C. MD obtained the 3D live imaging data in Figure 4, with support from OM and HG. LC obtained and analysed the lung and kidney culture data in Figures 1 and 2
Salmonella Infantis Delays the Death of Infected Epithelial Cells to Aggravate Bacterial Load by Intermittent Phosphorylation of Akt With SopB

Salmonella Infantis has emerged as a major clinical pathogen causing gastroenteritis worldwide in recent years. As an intracellular pathogen, Salmonella has evolved to manipulate and benefit from the cell death signaling pathway. In this study, we discovered that S. Infantis inhibited apoptosis of infected Caco-2 cells by phosphorylating Akt. Notably, Akt phosphorylation was observed in a discontinuous manner: immediately 0.5 h after the invasion, and then again before peak cytosolic replication. Single-cell analysis revealed that the second phase was only induced by cytosolic hyper-replicating bacteria at 3-4 hpi. Next, Akt-mediated apoptosis inhibition was found to be initiated by Salmonella SopB. Furthermore, Akt phosphorylation increased mitochondrial localization of Bcl-2 to prevent Bax oligomerization on the mitochondrial membrane, maintaining mitochondrial network homeostasis to resist apoptosis. In addition, S. Infantis induced pyroptosis, as evidenced by increased caspase-1 (p10) and GSDMD-N levels. In contrast, cells infected with the ΔSopB strain displayed faster but less severe pyroptosis and had a lower bacterial load. The results indicated that S. Infantis SopB-mediated Akt phosphorylation delayed pyroptosis but aggravated its severity. The wild-type strain also caused more severe diarrhea and intestinal inflammatory damage than the ΔSopB strain in mice. These findings revealed that S. Infantis delayed the cells' death by intermittent activation of Akt, allowing sufficient time for replication and thereby causing more severe inflammation.

INTRODUCTION

For decades, non-typhoidal Salmonella (NTS) has been one of the most common foodborne zoonotic pathogens worldwide, causing gastroenteritis in its hosts. There are more than 2,600 known Salmonella enterica serovars, with Salmonella enterica serovar Infantis (S. Infantis) being the third most prevalent serovar of human NTS infections in Europe (1,2). It is mainly transmitted through contaminated food, such as broiler chicken and pork (3-5). Worryingly, S. Infantis infection has been frequently reported in many countries recently, indicating that S. Infantis is an emerging pathogen causing gastroenteritis worldwide (6,7). Salmonella is a Gram-negative facultative intracellular pathogen that possesses two functionally distinct T3SSs (T3SS1 and T3SS2) encoded in Salmonella pathogenicity islands 1 and 2 (SPI1 and SPI2), respectively (8). In epithelial cells, approximately 10-30% of Salmonella can escape from the Salmonella-containing vacuole (SCV) to the cytoplasm after internalization and replicate there (9). Cytosolic Salmonella proliferates faster than SCV bacteria, a phenomenon known as hyper-replication (defined as >20 bacteria/cell) (10,11). Hyper-replicating Salmonella proliferates geometrically within several hours in host cells, causing cell death and extrusion and releasing invasive bacteria into the gastrointestinal tract (11). Salmonella appears to have evolved to benefit from host cell signaling pathways involved in regulating cell proliferation and death (12-14). Apoptosis is a highly conserved and gene-regulated physiological programmed cell death mechanism. An increasing body of evidence indicates that the pathogenic mechanisms of bacteria involve the regulation of apoptosis. The manipulation of apoptosis by Salmonella depends on the type of host cell and the stage of infection.
Multiple apoptotic pathways are found to be rapidly activated during Salmonella infection of macrophages (15-17). In contrast, the apoptosis of infected epithelial cells is inhibited by Salmonella (18-20). It is beneficial for Salmonella to prolong the lifespan of infected cells, enabling the bacteria to gain sufficient time for intracellular replication. Salmonella then induces the assembly of inflammasomes when the intracellular bacterial load increases (11,13). Caspase-1 is subsequently activated, which converts gasdermin D (GSDMD) and the precursors of IL-1β and IL-18 to their active forms. The N-terminal fragment of GSDMD accumulates on the cell membrane, forming a polymeric pore and inducing pyroptosis, which results in the release of the intracellular bacteria and inflammatory cytokines, all of which contribute to inflammation (21-23). During an enteric infection, the induction of rapid inflammatory pyroptosis may be conducive to the spread of Salmonella in the gastrointestinal tract. Salmonella can effectively escape from infected host cells, infect adjacent normal cells, and eliminate host immunocytes, leading to a weakened immune response (24). In the battle between the host and Salmonella, two pivotal biological processes that occur are apoptosis and pyroptosis. For Salmonella, the regulation of cell death is also dependent on the serotype. Interestingly, Salmonella Typhi can replicate in macrophages without inducing cytotoxicity, while Salmonella Typhimurium causes severe cytotoxicity in macrophages (25). Most studies have focused on the interaction between S. Typhimurium and macrophages or other phagocytes, and little is known about the role of programmed cell death in controlling the pathogenesis of S. Infantis in epithelial cells. Furthermore, intestinal epithelial cells represent the first point of contact for Salmonella with the host after invasion (20,26). In addition, there are significant differences in SPI-1 expression between S. Infantis and S. Typhimurium (27). In this study, we revealed that cytosolic S. Infantis phosphorylated Akt in a discontinuous manner through SopB to delay apoptosis and pyroptosis in infected Caco-2 cells. S. Infantis gained sufficient time to proliferate by prolonging the lifespan of infected cells, eventually causing pyroptosis accompanied by the release of inflammatory factors and bacteria. This created favorable conditions for the spread and infection of S. Infantis.

Reagents and Antibodies

The reagents and antibodies used in the study are shown in Table 1.

Bacterial Strains

The S. Infantis wild-type strain CAU1508 was isolated from the intestinal contents of diarrheic piglets. S. Infantis carrying the pFPV-mCherry plasmid has been previously described (12). The SopB mutant strain was derived from the parental S. Infantis wild-type strain CAU1508 and constructed using the λ-Red homologous recombination system.

Host Cell Infection and Enumeration of Intracellular Bacteria

Caco-2 cells were purchased from the Kunming Cell Bank of the Chinese Academy of Sciences. The Caco-2 cells were cultured in DMEM/High Glucose medium supplemented with 10% FBS and 1% penicillin-streptomycin at 37°C in a 5% CO2 incubator. Cells were seeded in six-well (1 × 10⁶ cells per well) or 24-well culture plates (1 × 10⁵ cells per well) and infected when the cell density reached 60% (this is to ensure that the bacteria can infect as many cells as possible).
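For orientation, the inoculum implied by such an infection setup follows directly from the cell number, the target MOI, and the density of the bacterial suspension. A minimal sketch with an assumed, purely illustrative culture density:

```python
def inoculum_volume_ml(n_cells: float, moi: float, cfu_per_ml: float) -> float:
    """Volume of bacterial suspension delivering the requested MOI."""
    return n_cells * moi / cfu_per_ml

# Example: one well of a 6-well plate (1 x 10^6 cells) at MOI ~50,
# assuming a resuspended culture at 1 x 10^9 CFU/ml (illustrative value).
print(inoculum_volume_ml(1e6, 50, 1e9))  # 0.05 ml of suspension per well
```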
Salmonella was grown in LB medium overnight with shaking at 200 rpm and 37°C, then subcultured in 10 ml of fresh LB medium (1:40) with shaking under the same conditions for 4 h. Following that, the bacteria were centrifuged at 4,000 g for 15 min at room temperature and resuspended in PBS. The infection followed the experimental procedure of the gentamicin protection assay. The monolayers were infected at an MOI of ~50 for 15 min and washed three times with PBS supplemented with gentamicin (100 µg/ml) to remove extracellular bacteria. Cells were then incubated in fresh growth medium containing gentamicin (100 µg/ml) for 2 h, followed by growth medium supplemented with gentamicin (10 µg/ml) until the infection was complete. The end of the 15-min Salmonella invasion was defined as 0 hpi (hours post-infection), and the infection lasted for 8 h in total. For groups that required treatment with MK2206 (1 µM) or SC79 (25 µM), both drugs were added 2 h before infection, and their concentrations remained unchanged in the medium until the end of the experiment. As the positive control of apoptosis, cells were treated with CCCP (carbonyl cyanide 3-chlorophenylhydrazone, an apoptosis inducer; 50 µM) for 1 h before the other experiments (immunoblotting or immunofluorescence) were carried out.

Cell Viability Assay

Cell viability was determined using the Cell Counting Kit-8 (CCK-8). Cells were seeded in 96-well culture plates. At the end of each treatment, the medium was removed and replaced with 100 µl of medium containing 10 µl of fresh CCK-8 solution, then incubated at 37°C for 2 h. Following that, absorbance was measured at 450 nm. Experiments were performed in six replicates per group to ensure the reliability of the results.

Enumeration of Intracellular Bacteria

In order to quantify viable intracellular bacteria, monolayers in six-well plates were washed three times with PBS containing gentamicin (100 µg/ml) for 5 min each time before being lysed in 1 ml of 0.3% (v/v) Triton X-100. Serial dilutions were plated on LB agar plates. For quantification of intracellular cytosolic bacteria, cells were co-incubated with medium containing chloroquine (700 µM) for 1 h before being solubilized in 1 ml of 1% (v/v) Triton X-100, then plated on LB agar plates.

Apoptosis Assay

After Salmonella infection, Caco-2 cells were collected, washed twice with PBS, suspended in 100 µl of 1× binding buffer, and stained with the Annexin V-PE/7-AAD Apoptosis Detection Kit. Next, 5 µl each of Annexin V-PE and 7-AAD were added to each sample and incubated in the dark for 15 min. Flow cytometry analysis was performed using a BD FACSVerse™ flow cytometer.

Calcein-AM/PI Assay

Cells were seeded into 24-well plates. After 8 h of Salmonella infection, the medium was removed and the cells were washed with PBS. Following that, 2 ml of 1× assay buffer supplemented with Calcein-AM (2 µM) and PI (4.5 µM) was added to the cells and …

Western Blotting

Total protein of Caco-2 cells was extracted using RIPA buffer (Solarbio, Beijing, China) containing a protease/phosphatase inhibitor cocktail (Cell Signaling Technology, USA) on ice for 30 min. Protein concentration was quantified using the BCA Protein Assay Kit (23227, ThermoFisher Scientific). SDS-PAGE was used to separate the protein samples, which were then transferred to polyvinylidene fluoride membranes. After blocking with 5% skim milk, the membranes were incubated with primary antibodies. Further details about the primary antibodies are given in Table 2.
Next, the membranes were incubated with secondary antibodies, then coated with ECL immunoblotting substrate. Images were captured using a Tanon 6200 chemiluminescence imaging workstation.

Mitochondrial Network Morphology Assay

Mitochondria were labeled with Tom20 and imaged using confocal microscopy. The Mitochondrial Network Analysis (MiNA) toolset, which consists of a relatively simple pair of macros using existing ImageJ plug-ins, was used to analyze the mitochondrial networks.

Animal Infection Experiment

All animal work was performed in accordance with the Guidelines for Laboratory Animal Use and Care of the Chinese Center for Disease Control and Prevention and the Rules for Medical Laboratory Animals (1998) of the Chinese Ministry of Health. A total of 36 six-week-old male C57BL/6 mice were obtained from Charles River Laboratory Animal Technology Co., Ltd (Beijing, China). Mice were provided food and water ad libitum throughout the entire experiment. All mice were administered a single dose of streptomycin (15 mg per mouse) via gastric gavage before being infected via gavage 24 h later (2 × 10⁶ Salmonella in 200 µl of PBS). Control mice were orally fed an equal volume of PBS. The mice were then euthanized, and their ileum tissues were harvested 3 days after infection. Mouse feces were homogenized in 1 ml of PBS, and serial dilutions were plated on LB agar plates to quantify bacterial burdens.

Assessment of Diarrhea Degree

The severity of diarrhea was evaluated using the fecal score and the dry/wet weight of fecal pellets. The fecal scoring criteria were as follows: 1 (normal stool); 2 (slightly wet, soft, and formed stool); 3 (wet and unformed stool with mucus); 4 (watery stool). In order to determine the fecal dry/wet weight ratio, mice were separately placed in a clean cage without food or water. Next, 0.5 g of feces was collected and weighed. Following that, the feces were placed in a 60°C oven for 24 h, until the weight change was less than 1%, and weighed again. The dry/wet weight ratio was then calculated.

Histopathologic Section

In order to evaluate ileal pathology, the mid-segments of the ileum were excised, rinsed with saline, then fixed with 4% paraformaldehyde for 48 h. Paraffin-embedded tissue samples …

Real-Time Quantitative PCR

For gene expression analysis, total RNA was extracted from ileal tissues using the Trizol reagent (Invitrogen, Carlsbad, CA, USA). Reverse transcription was performed using the PrimeScript RT Reagent Kit according to the manufacturer's instructions (RR047A, TaKaRa, Japan). Quantitative real-time RT-PCR was performed using the SYBR Green PCR Master Mix (LS2062, Promega, USA). The cycle threshold (CT) values of target genes were normalized to the CT value of the hypoxanthine phosphoribosyltransferase gene. The results were presented as fold-change using the 2^−ΔΔCT method. Primer sequences for PCR are listed in Table 3.

Statistical Analysis

All statistical analysis was performed using GraphPad Prism 7 with a one-way ANOVA or a t-test with Bonferroni correction. Data were presented as means ± SEM. P < 0.05 was considered statistically significant.

S. Infantis Can Hyper-Replicate in Caco-2 Cells

The growth curves showed that the bacterial load began to increase rapidly at 4 hpi and peaked at 8 hpi (Figure 1A). We distinguished two bacterial subpopulations in the cell by adding chloroquine (used to selectively kill vacuolar bacteria): one in the SCV and the other free in the cytoplasm.
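Because chloroquine selectively kills the vacuolar bacteria, the chloroquine-resistant CFU count estimates the cytosolic subpopulation, and the vacuolar subpopulation follows by subtraction from the total. A sketch of this bookkeeping with invented colony counts; the back-calculation assumes standard serial-dilution plating:

```python
def cfu_per_well(colonies: int, dilution: float, plated_ml: float, lysate_ml: float = 1.0) -> float:
    """Back-calculate total CFU in the whole lysate from a plate count."""
    return colonies / dilution / plated_ml * lysate_ml

total     = cfu_per_well(colonies=230, dilution=1e-4, plated_ml=0.1)  # untreated lysate
cytosolic = cfu_per_well(colonies=180, dilution=1e-4, plated_ml=0.1)  # chloroquine-treated
vacuolar  = total - cytosolic

print(f"total: {total:.2e}, cytosolic: {cytosolic:.2e}, vacuolar: {vacuolar:.2e}")
```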
The bacterial load was mainly contributed by cytosolic bacteria (Figure 1A). Next, we used mCherry-S. Infantis to infect the cells and LAMP-1 to label SCVs for confirmation. Confocal microscopy images revealed that cytosolic S. Infantis was the absolutely dominant subpopulation (Figure 1B).

S. Infantis Infection Does Not Induce Apoptosis of Caco-2 Cells

The cell survival rate significantly decreased at 6 hpi and remained almost unchanged after 8 hpi (Figure 1C). Therefore, we focused on the 0-8 hpi period, which encompasses the peak of cytosolic replication and the low phase of cell viability. The results showed that Caspase-9, Cleaved-caspase-3, and Cleaved-PARP were not activated during S. Infantis infection (Figure 1D). Morphological analysis of apoptosis revealed that S. Infantis infection did not result in apoptotic characteristics of the nucleus (Figures 1B, E). These findings indicated that cytosolic S. Infantis hyper-replicated in Caco-2 cells without inducing apoptosis.

Cytosolic S. Infantis Inhibits Apoptosis by Intermittently Phosphorylating Akt

Akt regulates cell survival and suppresses apoptosis via phosphorylation at the Thr308 and Ser473 sites (27-29). Several studies have found that Akt is constantly phosphorylated after S. Typhimurium invades epithelial cells (18,20). In this study, we hypothesized that continuous Akt phosphorylation contributes to the inhibition of apoptosis. As seen in Figure 2A, phosphorylation of Akt occurred in a discontinuous manner during S. Infantis infection. The first phase occurred at 0.5 hpi and rapidly decreased to near the background level. The second phase was observed at 3-4 hpi, with the expression of p-Akt higher than in the first phase. The levels of Cleaved-caspase-3 and Cleaved-PARP significantly increased after Akt phosphorylation was inhibited by MK2206, an Akt inhibitor that blocks Akt phosphorylation at Thr308 and Ser473 (Figures 2B, C). Next, we explored which bacterial subpopulation induced Akt phosphorylation. At 0.5 hpi, Akt phosphorylation was observed in all infected cells (Figure 2D). Images at 4 hpi revealed high levels of p-Akt in cells containing cytosolic hyper-replicating S. Infantis (bacterial number >20), which were not observed at 8 hpi (Figure 2E). In infected cells without cytosolic bacteria, p-Akt was not detected at 4 and 8 hpi (Figure 2F). These results demonstrated that S. Infantis-induced Akt phosphorylation occurred in two distinct phases. The first phase is widely induced after the invasion and rapidly depleted within 30 min, whereas the second phase is only induced by cytosolic bacteria at 3-4 hpi. Both phases of Akt phosphorylation inhibited apoptosis of the infected cells.

S. Infantis SopB

Notably, the SPI1 effector SopB, which contributes to invasion and SCV maturation, has 4-phosphatase activity, which can induce Akt activation during S. Typhimurium infection (18,20). As expected, the p-Akt level was almost completely diminished in cells infected with the ΔSopB mutant (Figures 3A, C). Interestingly, LY294002 (Ly, a pan-PI3K inhibitor) completely inhibited Akt phosphorylation, but some p-Akt expression remained after Wortmannin (Wor, a PI3K inhibitor) treatment (Figure 3B). Furthermore, infection with the SopB mutant induced apoptosis, as demonstrated by nuclear chromatin condensation and increased Cleaved-caspase-3 and Cleaved-PARP levels (Figures 3C, D).
SC79 (an Akt activator) was used to activate Akt (Figure 3E), with the findings confirming that SopB-mediated intermittent Akt phosphorylation inhibited apoptosis of infected cells (Figures 3F, G). Importantly, by inhibiting apoptosis, wild-type (WT) S. Infantis had enough time for intracellular replication, resulting in an increased bacterial load (Figure 3H).

SopB-Mediated Akt Phosphorylation Inhibits Apoptosis by Maintaining Mitochondrial Dynamic Network Homeostasis

The mitochondrion is the primary control organelle responsible for endogenous apoptosis. In normal cells, individual mitochondria connect to form tubules and shape dynamic networks through continuous division and fusion (30). Therefore, we evaluated the morphology of the infected cells' mitochondrial network. At 4 hpi, the mitochondria of WT S. Infantis-infected cells still maintained an abundant network structure, but infection with the SopB mutant disrupted the mitochondrial network (Figure 4A). The mitochondrial cavity also appeared to be expanding, as evidenced by the ring-shaped structures (yellow arrows) (Figure 4A). Morphological analysis of the mitochondrial network revealed that infection with the WT strain had no discernible effect on the dynamics of the mitochondrial network (Figure 4B). The key events of mitochondria-mediated apoptosis are the opening of the mitochondrial permeability transition pore (MPTP) and the release of cytochrome c (31,32). In the WT group, cytochrome c and mitochondria remained co-localized, while MK2206 treatment resulted in cytochrome c translocation from the mitochondria to the cytoplasm (Figures 4C, D). Infection with the SopB mutant resulted in massive cytochrome c release into the cytoplasm, which was reversed by the addition of SC79, partially restoring the mitochondrial network (Figures 4C, D). Interestingly, co-localization of cytochrome c and mitochondria was also restored by the addition of CsA (an MPTP blocker) (Figure 4C). The mitochondrial membrane permeability may be affected by SopB-induced p-Akt, and the Bcl-2 family regulates the permeability of the mitochondrial outer membrane by inhibiting Bax translocation from the cytosol to the mitochondria (33,34). We extracted mitochondrial protein and detected the distribution of Bcl-2 and Bax. As shown in Figure 4D, WT S. Infantis infection significantly increased the distribution of Bcl-2 in the mitochondria, while the addition of MK2206 decreased Bcl-2 and increased the distribution of Bax in the mitochondria. Infection with the SopB mutant also resulted in a decrease of Bcl-2 and an increase of Bax in the mitochondria, which could be reversed by adding SC79. In addition, WT S. Infantis infection could phosphorylate Bad and Caspase-9 (Figure 4E), which enhanced the inhibition of apoptosis. In summary, S. Infantis maintained mitochondrial dynamic network homeostasis through SopB-mediated Akt phosphorylation, hence suppressing apoptosis in infected cells.

SopB-Mediated Akt Phosphorylation Delays Pyroptosis by Inhibiting Caspase-1

Flow cytometry results showed that the proportion of 7-AAD+/Annexin V-PE+ cells significantly increased during 6-8 hpi, and the cells entered the late stage of apoptosis without passing through an early phase (Figure 5A). This indicated that there was a change in the membrane permeability of infected cells. In order to validate this conjecture, we performed double staining with Calcein-AM and PI during infection with S. Infantis. Images revealed that a subset of cells in the
S. Infantis infection group had been damaged, as evidenced by PI-positive staining (Figure 5B). The scanning electron microscopy images revealed that the plasmalemma was destroyed and bacteria had been extruded through the pores (Figure 5C), suggesting the occurrence of pyroptosis. Next, the protein markers of pyroptosis, caspase-1 (p10) and GSDMD-N, were examined. Infection with WT S. Infantis significantly activated pyroptosis (Figure 5E). Surprisingly, the ΔSopB strain induced caspase-1 (p10) and GSDMD-N activation 2 h earlier than WT S. Infantis (Figure 5F). However, the SopB mutant induced lower levels of caspase-1 (p10) and GSDMD-N compared to the WT strain, indicating a weaker degree of pyroptosis induced by the SopB mutant (Figure 5G). A recent study reported that p-Akt suppressed inflammasome activation in Salmonella-infected macrophages (35). Pyroptosis may thus be regulated by S. Infantis through p-Akt. MK2206 significantly reduced the levels of caspase-1 (p10) and GSDMD-N induced by WT S. Infantis, while SC79 treatment caused the WT strain to display a regulation similar to that of the ΔSopB strain (Figures 5H, I). Furthermore, SC79 treatment resulted in a significant decrease in caspase-1 (p10) and GSDMD-N levels during infection with the ΔSopB strain (Figure 5J), indicating that Akt phosphorylation both delayed pyroptosis and aggravated the severity of pyroptosis of infected Caco-2 cells. Intracellular bacterial load detection revealed that the number of bacteria in cells infected with the WT strain was much higher than in cells infected with the ΔSopB strain (Figure 5D). This may explain the two phenotypes of S. Infantis causing varying degrees of pyroptosis: the WT strain reaches a greater bacterial load and stimulates the inflammasome more strongly.

WT S. Infantis Causes More Severe Intestinal Inflammatory Damage Than the ΔSopB Strain

In order to verify our findings in vivo, we infected the C57BL/6 mouse model with Salmonella. The severity of diarrhea was determined by the fecal score and the dry/wet weight of fecal pellets. The results revealed that WT S. Infantis caused more severe diarrhea than the ΔSopB strain (Figures 6A, B). In addition, the fecal bacterial load in the WT group was also significantly higher than in the ΔSopB group (Figure 6C). Since Salmonella infection can cause severe ileal injury, the pathological changes in the ileum were examined. Infection with WT S. Infantis resulted in more severe ileal damage than infection with the ΔSopB strain (Figures 6D, E). Consistent with the in vitro results, p-Akt (Ser473 and Thr308) was found to be highly expressed in the WT group (Figure 6F). As shown in Figure 6G, infection with WT S. Infantis significantly increased the mRNA levels of inflammatory factors, while the mRNA levels of inflammatory factors induced by the ΔSopB strain were lower compared to WT S. Infantis. In combination with the detection of the caspase-1 level (Figure 6H), we found that infection with the WT strain led to more severe intestinal inflammatory injury. Furthermore, immunoblotting and the TUNEL fluorescence assay showed that the ΔSopB strain caused more severe apoptosis of intestinal cells than the WT strain (Figures 6I, J). Intestinal cells infected with the ΔSopB strain may shed rapidly from the epithelium through apoptosis and be eliminated from the body, reducing the gut Salmonella load and the inflammatory response. In conclusion,
S. Infantis delayed the death of infected Caco-2 cells through intermittent activation of Akt mediated by SopB, allowing intracellular cytosolic bacteria sufficient time for replication and resulting in more severe intestinal inflammation.

DISCUSSION

Many pathogenic bacteria that are closely related to public health reproduce intracellularly, enhancing their virulence. Invasion and colonization of epithelial cells are crucial processes in Salmonella pathogenesis (36). The replication of cytosolic Salmonella is key to the early establishment of the infection (10,11). In this study, we elucidated the mechanism by which cytosolic S. Infantis delayed the death of infected epithelial cells via intermittent Akt phosphorylation mediated by SopB. Salmonella utilizes SopB to activate Akt, suggesting that SopB plays an important role in regulating host cell survival (18,19). However, the distribution of SopB-dependent Akt phosphorylation in epithelial cells remained unclear. A single-cell approach was used to evaluate the relationship between SopB and Akt phosphorylation. Because SopB is translocated into cells, where it mediates both actin-dependent and myosin II-dependent bacterial invasion, we attributed the induction of the first wave of Akt phosphorylation to its residual activity. The second wave of Akt phosphorylation occurred only at 3-4 hpi, with Akt phosphorylation strongly induced only in infected cells containing hyper-replicating cytosolic bacteria. Notably, Akt phosphorylation was absent at all other times in all infected cells. We hypothesized that the second wave of Akt activation was due to the residual SPI-1 activity of bacteria escaping from the SCV. This was similar to Akt phosphorylation by S. Typhimurium: the first stage was widely induced during invasion, and the second stage was induced only in the infected cells containing cytosolic Salmonella (20). However, there were some differences between the two serovars: the first phase of Akt phosphorylation induced by S. Typhimurium was largely depleted by 3 hpi, whereas the first phase of Akt phosphorylation induced by S. Infantis was almost depleted at 1 hpi. The second stage of Akt activation induced by S. Infantis occurred at 3-4 hpi, while S. Typhimurium-induced Akt activation occurred later, at 6 hpi (20). Previous studies have shown that S. Infantis is less invasive and induces considerably weaker enteritis than S. Typhimurium (37). These differences have been attributed to the lower expression of SPI-1 in S. Infantis compared with S. Typhimurium (37). SopB delayed the otherwise inevitable and rapid apoptosis of intestinal epithelial cells. For Salmonella, the most obvious advantage gained by inhibiting apoptosis is time, which allows Salmonella to establish an intracellular stronghold to rapidly proliferate, as well as to regulate its own and the host's gene expression to prepare for the spread of infection in the gut. The mitochondrion is the primary organelle for endogenous apoptosis (30). In this study, we discovered that cytosolic S. Infantis phosphorylated Akt through SopB to (i) reposition Bcl-2 to regulate the permeability of the mitochondrial outer membrane by inhibiting Bax translocation from the cytosol to the mitochondria; (ii) maintain mitochondrial dynamic network homeostasis; and (iii) phosphorylate Bad and Caspase-9, thereby maintaining mitochondrial membrane and network homeostasis to suppress the apoptosis of infected cells.
Importantly, we discovered that SopB-mediated Akt phosphorylation delayed the pyroptosis of infected cells. Recently, it has been reported that SopB inhibited the activation of the NLRC4 inflammasome in BMDMs through an Akt signal-dependent process (35). Another study found that SopB promoted YAP phosphorylation through Akt in B cells, thereby inhibiting the assembly of the inflammasome (38). The activation of the inflammasome is a crucial step in inducing pyroptosis. Our findings demonstrated that SopB-mediated Akt phosphorylation also delayed the activation of caspase-1 and pyroptosis in Caco-2 cells. Compared to apoptosis, pyroptosis occurs faster and is accompanied by the release of proinflammatory factors, such as IL-1β and IL-18 (21-23). Inflammatory factors increase the number of inflammatory cells and aggravate the inflammatory response, which is a vital defense mechanism for the host to combat pathogenic microorganism infection (39). Although the inflammatory response accelerates the elimination of pathogens, it also alters the intestinal environment by disrupting the gut microbiota and causing a burst of intestinal electron acceptors (40,41). An alteration in the intestinal ecosystem gives pathogens opportunities to gain a growth advantage and to spread infection. In this study, S. Infantis rapidly induced pyroptosis within 2 h after Akt phosphorylation diminished. WT S. Infantis caused more severe intestinal inflammation in vivo than the SopB mutant (Figure 6). We hypothesized that SopB contributes to the rapid proliferation of bacteria in the intestinal infection phase of S. Infantis, which leads to an inflammatory response, intestinal damage, and other changes in the intestinal environment, and thereby provides favorable conditions for the subsequent establishment of long-term colonization in the gut.

In conclusion, this study demonstrated that the S. Infantis SPI-1 effector SopB acts as a pro-survival factor in epithelial cells by inhibiting apoptosis and delaying pyroptosis. S. Infantis gained sufficient time to proliferate through the regulation of host cell death, which resulted in inflammation and suitable conditions for the spread and colonization of the pathogen in the gut. As bacterial pathogens can manipulate host cell death mechanisms to enhance their survival and spread infection, further studies focusing on how S. Infantis regulates different types of host cell death to cause enteritis will be performed. This study also provides a solid theoretical basis as well as potential drug targets for the treatment of salmonellosis.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The animal study was reviewed and approved in accordance with the Guidelines for Laboratory Animal Use and Care from the Chinese Center for Disease Control and Prevention and the Rules for Medical Laboratory Animals (1998) from the Chinese Ministry of Health, under the approval of the Animal Ethics Committee of the China Agricultural University.

AUTHOR CONTRIBUTIONS

B-XC designed and performed the experiments. Y-NL, N-L, and Y-HZ participated and assisted in the experiments as well as provided advice during the procedure. L-XY and S-YC contributed to data analysis.
B-XC drafted the manuscript, and J-FW critically revised the manuscript. J-FW was responsible for obtaining funding and overseeing the project. All authors contributed to the article and approved the submitted version.
SPMSM Sliding Mode Control Based on the New Super Twisting Algorithm

To achieve high-performance control of the surface-mounted permanent magnet synchronous motor (SPMSM) speed control system, this paper proposes a high-order sliding mode control strategy based on a new super twisting algorithm (NSTA). This strategy introduces an adaptive term into the proportional term of the original super twisting algorithm, which solves the low reaching speed and poor antidisturbance ability caused by the square-root calculation of the proportional term in the original algorithm. The simulation results show that the proposed strategy can effectively improve the system's response speed and antidisturbance ability while greatly suppressing the chattering phenomenon of traditional sliding mode control.

Introduction

Surface-mounted permanent magnet synchronous motors are widely used in aerospace, numerical control systems, wind power generation, and new energy electric vehicle drive systems due to their high efficiency and small size [1,2]. However, the SPMSM is a nonlinear, strongly coupled, multivariable plant. Although a traditional PI controller can meet the control requirements within a certain range, it cannot meet the requirements of high-performance control when system parameters change or the system is affected by external uncertainties. To solve the problems caused by traditional PI control, scholars worldwide have conducted extensive research, and several achievements of modern control theory have been successfully applied to the SPMSM speed control system, such as adaptive control, fuzzy control, neural networks, active disturbance rejection control, and sliding mode control. Among them, sliding mode control is widely used because of its robustness, fast dynamic response, and ease of implementation [3].

Although traditional sliding mode control improves the robustness of the system, when it is applied to a real system, factors such as the time delay and spatial lag of the switching action and errors in state detection easily cause chattering and reduce the dynamic quality of the system [4]. Therefore, how to suppress chattering is the key to the practical application of sliding mode control. The Chinese scholar Gao proposed a sliding mode control strategy based on the reaching law in 1996, which effectively improved the system's dynamic quality and reduced sliding mode chattering to a certain extent [5]. The authors in literature [6] proposed a sliding mode control strategy based on the exponential reaching law and the Sigmoid function, which effectively suppressed sliding mode chattering. However, the introduction of the Sigmoid function reduces the convergence speed and stability of the system to a certain extent, and the exponential reaching law suffers from a contradiction among sliding mode chattering, reaching speed, and robustness. The authors in literature [7,8] designed a sliding mode control strategy based on an integral sliding mode surface, which introduces the integral of the state variables into the conventional sliding mode surface; this can effectively eliminate the steady-state error of speed and torque, accelerate the system response, and provide strong robustness to load disturbance. However, the integral sliding mode surface is prone to integral saturation, which leads to large overshoot and degrades the control quality of the system.
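As an illustrative aside (not part of the original study; the gain values and sample time below are placeholders), the chattering caused by a discontinuous sign term in a reaching law can be reproduced in a few lines of Python. Once the sliding variable reaches zero, the discrete-time sign term leaves a residual oscillation of roughly ε·dt in amplitude, which is the chattering/reaching-speed trade-off discussed above:

```python
import numpy as np

# Illustrative sketch: discrete-time integration of the exponential
# reaching law ds/dt = -eps*sign(s) - q*s. After the surface s = 0 is
# reached, the discontinuous sign term leaves a residual oscillation
# (chattering) of roughly eps*dt in amplitude.
eps, q, dt = 500.0, 300.0, 1e-4   # placeholder gains and sample time
s, tail = 10.0, []
for n in range(4000):
    s += dt * (-eps * np.sign(s) - q * s)
    if n >= 3900:                 # collect steady-state samples
        tail.append(s)
print(f"residual band: {min(tail):.3f} .. {max(tail):.3f}")
print("predicted amplitude eps*dt =", eps * dt)
```

Increasing eps speeds up the reaching phase but widens the residual band in proportion, which is exactly the contradiction the later sections set out to resolve.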
The authors in literature [9-12] proposed a terminal sliding mode control strategy that improves the convergence of the system so that the system state converges to a given trajectory within finite time, but it suffers from a singularity problem. Therefore, nonsingular terminal sliding mode control was proposed; it avoids the singular region directly in the sliding mode design and retains the finite-time convergence of the terminal sliding mode. However, when the system state is far from the equilibrium point, its convergence time is relatively long and the dynamic characteristics worsen [13-15]. The author in literature [16] proposed a second-order sliding mode control: a super twisting algorithm composed of an integral term and a proportional term. The proportional term provides the reaching speed; when the moving point reaches the sliding mode surface, the integral term makes the system trajectory move around the origin, ensuring the continuity of the output control signal and thus reducing the chattering inherent in sliding mode control. In literature [17], the super twisting algorithm was applied to the SPMSM speed regulation system and compared with sliding mode control based on the exponential reaching law. It was found that the algorithm effectively resolved the contradiction among chattering, reaching speed, and antidisturbance ability in traditional sliding mode control while suppressing sliding mode chattering. However, determining the gain of this algorithm requires the disturbance term to be differentiable and bounded, and in practical applications this bound is difficult to determine. To guarantee convergence, an excessively large gain value is often selected, which may cause serious system chattering or even instability. Therefore, the authors in literature [18] proposed a PMSM speed control system based on an adaptive super twisting algorithm, which effectively solved the gain-overestimation problem of the existing algorithm and improved the system's stability. However, none of the above strategies addresses the insufficient reaching speed and poor antidisturbance ability of the super twisting algorithm that stem from the square-root calculation in its proportional term [19,20].

To solve the above problems, this paper proposes a sliding mode control strategy based on a new super twisting algorithm. This strategy introduces an adaptive term into the proportional term of the original super twisting algorithm, which resolves the low reaching speed and poor antidisturbance ability caused by the square-root calculation of the proportional term. On the premise of not increasing system chattering, the dynamic quality of the SPMSM speed control system is improved.

The Motor Motion Equation of SPMSM

In this paper, the SPMSM is taken as the research object, and its motion equation can be expressed as follows [1]:

J·(dω/dt) = (3/2)·p_n·ψ_f·i_q − T_L − B·ω, (1)

where p_n is the number of pole pairs; ψ_f is the rotor flux linkage; ω is the speed; B is the viscous friction coefficient; J is the moment of inertia; T_L is the load torque; and i_q is the q-axis current.

Sliding Mode Controller Based on the Exponential Reaching Law

The linear sliding surface is selected as the speed error:

s = ω* − ω, (2)

where ω* is the reference speed of the motor and ω is the actual speed. In recent years, the sliding mode algorithm based on the reaching law has been widely used in SPMSM speed controller design because it can guarantee the dynamic quality of the reaching motion.
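A minimal numerical sketch of the motion equation (1) is given below. The parameter values are illustrative placeholders rather than the Table 1 values used in the paper, and the q-axis current is held constant for simplicity:

```python
# Minimal sketch of the SPMSM motion equation (1):
#   J * domega/dt = (3/2) * p_n * psi_f * i_q - T_L - B * omega
# Placeholder parameters, not the paper's Table 1 values.
P_N = 4          # pole pairs
PSI_F = 0.175    # rotor flux linkage [Wb]
J_M = 0.003      # moment of inertia [kg*m^2]
B_F = 0.008      # viscous friction coefficient [N*m*s/rad]

def domega_dt(omega, i_q, t_load):
    """Right-hand side of equation (1) solved for domega/dt."""
    t_e = 1.5 * P_N * PSI_F * i_q   # electromagnetic torque
    return (t_e - t_load - B_F * omega) / J_M

# Forward-Euler integration of the speed response to a constant i_q
dt, omega = 1e-4, 0.0
for _ in range(5000):               # 0.5 s of simulated time
    omega += dt * domega_dt(omega, i_q=5.0, t_load=1.0)
print(f"speed after 0.5 s: {omega:.1f} rad/s")
```

The speed controllers discussed next all act by shaping i_q so that the sliding variable of equation (2) is driven to zero.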
The exponential reaching law is as follows [13]:

ds/dt = −ε·sign(s) − q·s, (3)

where s is the sliding mode variable and ε and q are the sliding mode switching gains. Differentiating equation (2) and combining with equation (1), we obtain

ds/dt = −dω/dt = (1/J)·(B·ω + T_L − (3/2)·p_n·ψ_f·i_q). (4)

Combining equations (3) and (4) yields equation (5), from which the control law (6) can be designed; the reference current of the q axis is then obtained as

i_q* = (2/(3·p_n·ψ_f))·[B·ω + T_L + J·(ε·sign(s) + q·s)]. (7)

As shown in equation (7), chattering will occur in the system due to the discontinuous term ε·sign(s) in the reference current. Secondly, the reaching speed, chattering, and robustness of the system are all related to the values of ε and q: the larger ε and q are, the stronger the robustness and the faster the reaching, but the more significant the chattering. Therefore, there is a contradiction between system chattering on the one hand and reaching speed and robustness on the other.

The Lyapunov function is defined as V = s²/2 (8), and its derivative is dV/dt = s·(ds/dt) (9). Combining equations (1), (4), and (6), we get dV/dt = −(ε·sign(s)·s + q·s²). Since ε and q are positive constants, ε·sign(s)·s + q·s² is always greater than 0, so dV/dt < 0. According to the Lyapunov stability criterion, the SMC speed controller based on the exponential reaching law is stable.

Speed Controller Design Based on NSTA-SMC

The high-order sliding mode algorithm provides a solution to the contradiction between system chattering and reaching speed. Unlike other high-order sliding mode algorithms, the super twisting algorithm does not need the derivative of the sliding mode surface, so the sliding mode surface and its derivative can be stabilized to zero simultaneously; this avoids the noise introduced by differentiation, and the control law design is simple. Its general expression is as follows [14]:

ds/dt = −α·|s|^{1/2}·sign(s) + v + φ,
dv/dt = −β·sign(s), (11)

where s is the sliding mode variable; φ is the disturbance term; α and β are the sliding mode gain coefficients; and sign(·) is the switching function. From equation (11), it can be seen that the proportional term α·|s|^{1/2}·sign(s) improves the reaching speed of the algorithm, but because it is a square-root function of the sliding variable, its gain directly limits the reaching speed and antidisturbance ability of the algorithm. In order to improve the antidisturbance ability and reaching speed of the algorithm, a new super twisting algorithm is proposed in this paper, namely

ds/dt = −α·|s|^{1/2}·sign(s) − k·s + v + φ,
dv/dt = −β·sign(s), (12)

where k·s is a linear term and k > 0. The sliding surface is selected as

s = ω* − ω. (13)

Differentiating equation (13) gives equation (14); combining equations (1) and (14) gives the output of the controller as equation (15), and combining equations (12) and (15) gives the output of the controller as equation (16).

Theorem 1. Suppose the perturbation term φ is Lipschitz continuous with 2·|dφ/dt| ≤ δ for some δ > 0. Then the state of the system of equation (16) can converge to the origin in finite time.

Proof. Let (B/J)·ω + (T_L/J) = φ and use the variable substitution z1 = s, z2 = v + φ; the system can then be rewritten as

dz1/dt = −α·|z1|^{1/2}·sign(z1) − k·z1 + z2,
dz2/dt = −β·sign(z1) + dφ/dt. (19)

For this system, the quasi-quadratic Lyapunov function [22] is selected as V(z1, z2) = ζ^T·Π·ζ, where ζ^T = [ζ1, ζ2] = [|z1|^{1/2}·sign(z1), z2] and Π is a real symmetric positive definite matrix. V(z1, z2) is a radially unbounded, continuous, positive definite function, and it is differentiable everywhere except on the set z1 = 0.
The derivative of V(z1, z2) along the system trajectory can now be evaluated. Here ρ = 2·|z1|^{1/2} and 2·|ζ1|·(dφ/dt) are scalars; expanding B^T·Π·ζ and ζ^T·Π·B shows that these, too, are scalars, and since Π = Π^T (equation (22)), B^T·Π·ζ = ζ^T·Π·B. Let m = B^T·Π·ζ = ζ^T·Π·B and m² = ζ^T·Π·B·B^T·Π·ζ; applying the corresponding inequality to equation (25), invoking the assumption of Theorem 1 that the perturbation term φ is Lipschitz continuous with 2·|dφ/dt| ≤ δ, δ > 0, and combining ρ = 2·|z1|^{1/2} with the scalar 2·|ζ1|·(dφ/dt), we obtain equation (28). Letting C = [1 0] and combining equations (23) to (28), it can be concluded (equation (29)) that dV/dt is bounded by a quadratic form in ζ involving a matrix Q. Obviously, when Q is a positive definite matrix, dV/dt < 0. Thus, the Lyapunov function V(z) of the system of equation (16) satisfies the conditions of Lyapunov stability theory: V > 0, V is radially unbounded, and dV/dt < 0. By further expanding and calculating Q, and noting that Q > 0 implies dV/dt < 0, it follows from Schur's complement lemma [23] that a necessary and sufficient condition for Q to be positive definite is given by equation (32). Since k > 0, and since in the existing system there must be |ω* − ω|^{1/2}_max ≥ |z1|^{1/2} ≥ 0, equation (32) can be transformed into an explicit condition on the gains. The proof is completed.

The Influence of the Linear Term on the Stability and Reaching Speed of the System

According to the above proof, the linear term k·s introduced into the proportional term of the super twisting algorithm does not affect the system's stability as long as k > 0 is guaranteed. At the same time, when the system state is close to the sliding mode surface, that is, |s| < 1, the linear term k·s approaches 0 and the proportional term reduces to the square-root reaching law α·|s|^{1/2}·sign(s), so the problem of low reaching speed remains. To solve this problem, the adaptive term k·|s|^{b·sign(|s|−1)}·s is introduced in this paper, and equation (16) becomes

ds/dt = −α·|s|^{1/2}·sign(s) − k·|s|^{b·sign(|s|−1)}·s + v + φ,
dv/dt = −β·sign(s),

where k·|s|^{b·sign(|s|−1)}·s is the adaptive term, k > 0, 0 < b < 1. For |s| > 1 the exponent sign(|s| − 1) is +1 and the term behaves as k·|s|^{b}·s, accelerating the reaching motion far from the surface; for |s| < 1 the exponent is −1 and the term behaves as k·|s|^{−b}·s, which decays more slowly than the plain linear term k·s as s approaches zero.

Simulation Research

To verify the antidisturbance ability and chattering suppression of the strategy proposed in this paper, a simulation model was built in Matlab/Simulink according to Figure 1. The SPMSM parameters used for simulation are shown in Table 1. To prove the effectiveness of the proposed new super twisting algorithm, the performances of NSTA-SMC, STA-SMC, sliding mode control based on the exponential reaching law (SMC), and PI control are compared in simulation, with all four methods using a PI controller with identical parameters for the current loop. The simulation results are shown in Figure 2. The NSTA-SMC parameters designed in this paper are α = 1500, β = 60000, k = 600, and b = 0.5; the PI parameters are k_p = 0.1 and k_i = 3; the SMC1 parameters are ε = 500000, c = 60, and q = 300; the SMC2 parameters are ε = 800000, c = 60, and q = 300; and the STA-SMC parameters are α = 1500 and β = 60000.

In Figure 2 the system adopts a no-load start, and the given speed is 1000 rpm. From Figure 3 and Table 2, compared with PI, SMC1, SMC2, and STA-SMC, NSTA-SMC has the smallest starting overshoot, the shortest regulation time, and the fastest response. At 0.2 s, the load changes to 10 N·m. Figure 4 and Table 2 show that when the load changes suddenly, the PI speed drop is the largest and its regulation time is the longest.
Compared with PI, the speed drops of SMC1 and SMC2 are smaller, and their magnitude is closely related to the controller parameters ε and q. Compared with STA-SMC, the speed drop of NSTA-SMC with the adaptive term is 51.9% less, and NSTA-SMC quickly recovers to the given speed, which effectively solves the low reaching speed of the original super twisting algorithm and further improves the antidisturbance ability of the system. At 0.4 s, the speed suddenly rises to 1200 rpm. From Figure 5 and Table 2, when the speed changes suddenly, the PI overshoot is the largest and its regulation time is the longest; although SMC1 and SMC2 have no overshoot, their regulation time is much longer than that of STA-SMC and NSTA-SMC. At 0.6 s, the load of 10 N·m is removed. From Figure 6 and Table 2, compared with PI, SMC1, SMC2, and STA-SMC, NSTA-SMC shows the slightest change in speed and quickly recovers to the given speed, with good antidisturbance ability. From Figure 7 and Table 2, when the speed is 1200 rpm, the steady-state error of the PI speed is the largest and the control quality is poor. The system chattering of SMC1 and SMC2 is closely related to the controller parameters ε and q. Compared with SMC, NSTA-SMC effectively suppresses sliding mode chattering and resolves the contradiction between the reaching speed of SMC and system chattering. Compared with STA-SMC, NSTA-SMC improves the system's response speed and antidisturbance ability without increasing system chattering, further improving the system's dynamic quality. From Figures 8 and 9, compared with PI, SMC1, SMC2, and STA-SMC, the control strategy proposed in this paper has the fastest torque response and smaller torque fluctuation. When dealing with an external disturbance, it takes the least time to restore the original torque and has the best dynamic response performance.

Conclusion

(1) Compared with STA-SMC, NSTA-SMC with the adaptive term effectively solves the low reaching speed and poor antidisturbance ability caused by the square-root calculation of the proportional term in the original super twisting algorithm without increasing system chattering, and it improves the dynamic following performance of the system.

(2) Compared with SMC based on the exponential reaching law, NSTA-SMC effectively resolves the contradiction among sliding mode chattering, reaching speed, and antidisturbance ability while suppressing chattering, further improving the control quality of the system.

(3) Compared with PI, NSTA-SMC effectively remedies the excessive overshoot and poor antidisturbance ability of PI control and improves the dynamic and static quality of the system.

(4) How to determine the optimal sliding mode gains of the new super twisting algorithm remains a focus of further research to improve system performance.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest related to the article.
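As an illustrative companion to the simulation study above (my own sketch, not the paper's Simulink model), the following Python snippet compares the reaching behavior of the original STA and the adaptive NSTA on the sliding variable alone, with the disturbance φ set to zero. The gains are the values quoted in the paper; the Euler scheme, the initial error s0, and the stopping criterion are placeholder choices:

```python
import numpy as np

# Reaching-phase comparison of STA vs. adaptive NSTA on the sliding
# variable s, with phi = 0. Gains are the paper's simulation values;
# s0, dt, and the stopping rule are assumptions for illustration.
ALPHA, BETA_G, K, B_EXP = 1500.0, 60000.0, 600.0, 0.5

def reach_time(use_nsta, s0=50.0, dt=1e-5, steps=200_000):
    s, v = s0, 0.0
    for n in range(steps):
        adaptive = (K * abs(s) ** (B_EXP * np.sign(abs(s) - 1.0)) * s
                    if use_nsta else 0.0)
        ds = -ALPHA * np.sqrt(abs(s)) * np.sign(s) - adaptive + v
        s_next = s + dt * ds
        v += dt * (-BETA_G * np.sign(s))
        if s_next * s <= 0.0:        # first crossing of s = 0
            return (n + 1) * dt
        s = s_next
    return float("inf")

print(f"STA  reach time: {reach_time(False):.5f} s")
print(f"NSTA reach time: {reach_time(True):.5f} s")
```

Under these assumptions the adaptive term shortens the reaching time markedly, consistent with the qualitative conclusions drawn from the paper's Figures 3-7.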
Facile Surfactant-Free Synthesis of p-Type SnSe Nanoplates with Exceptional Thermoelectric Power Factors

Abstract

A surfactant-free solution methodology, simply using water as a solvent, has been developed for the straightforward synthesis of single-phase orthorhombic SnSe nanoplates in gram quantities. Individual nanoplates are composed of {100} surfaces with {011} edge facets. Hot-pressed nanostructured compacts (E_g ≈ 0.85 eV) exhibit excellent electrical conductivity and thermoelectric power factors (S²σ) at 550 K. S²σ values are 8-fold higher than those of equivalent materials prepared using citric acid as a structure-directing agent, and the electrical properties are comparable to the best-performing, extrinsically doped p-type polycrystalline tin selenides. The method offers an energy-efficient, rapid route to p-type SnSe nanostructures.

Growing global energy demands, together with the negative impacts resulting from combustion of fossil fuels, have diverted attention to technologies for sustainable energy generation and conversion. [1] Thermoelectrics realize direct inter-conversion between thermal and electrical energy and provide opportunities to harvest useful electricity from waste heat (and conversely to perform refrigeration). The thermoelectric conversion efficiency of a material is determined by its dimensionless figure of merit, ZT = S²σT/κ, where S, σ, T, and κ represent the Seebeck coefficient, electrical conductivity, absolute temperature, and thermal conductivity, respectively. [2] Extensive efforts have been devoted to the improvement of the thermoelectric performance of state-of-the-art materials, [3] and to the discovery of new thermoelectrics [4] with ZT values > 2.

Single-crystalline SnSe combines a high ZT with a relatively low toxicity and high Earth-abundance of the component elements. [4] SnSe crystals possess very low thermal conductivity owing to lattice anharmonicity, yielding record high ZT values of 2.6 and 2.3 at 923 K along the b and c crystallographic directions, respectively. [4] Polycrystalline SnSe materials have been fabricated to improve mechanical properties, [5] but ZT has been limited to 1, owing to both increased electrical resistivity and thermal conductivity. [5] Unfortunately, the synthesis of SnSe is protracted and energy-intensive, involving heating, melting, and annealing at high temperatures (≈800-1223 K). [4,5] Before the potential of SnSe can be realized, a fast, cost-effective, and large-scale synthesis route to the pure selenide that does not sacrifice performance is essential.

Nanostructuring very effectively enhances ZT. The high density of interfaces improves phonon scattering, decreasing the lattice thermal conductivity. [2,3] Bottom-up solution synthesis methods facilitate control of size, morphology, crystal structure, and defects. [6] However, the organic surfactants that can control morphology through surface modification are commonly electrically insulating, which can drastically reduce the electrical conductivity of the materials. [7] Ligand replacement methods switch smaller species for long-chain surfactant molecules, [7] but sometimes involve using highly toxic chemicals, [8] and introduce impurities, [7b] which again can adversely influence the transport behavior of the materials. [7b]
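For readers who wish to plug numbers into the figure of merit defined above, a small helper is sketched below. The example values are illustrative placeholders only, not measurements reported in this work:

```python
# Helper for the dimensionless figure of merit ZT = S^2*sigma*T/kappa
# defined above. Example values are illustrative placeholders.
def figure_of_merit(seebeck, sigma, temperature, kappa):
    """ZT from S [V/K], sigma [S/m], T [K], kappa [W/(m*K)]."""
    power_factor = seebeck ** 2 * sigma      # S^2*sigma [W m^-1 K^-2]
    return power_factor * temperature / kappa

# e.g. S = 300 uV/K, sigma = 5000 S/m, T = 900 K, kappa = 0.35 W/(m*K)
print(f"ZT = {figure_of_merit(300e-6, 5000.0, 900.0, 0.35):.2f}")  # ~1.16
```

The cubic-in-S sensitivity is worth noting: because S enters squared and also correlates with carrier concentration, optimizing ZT is a balancing act between S, σ, and κ rather than a maximization of any single quantity.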
Organic contamination can be prevented only if a suitable surfactant-free synthesis strategy can be found, [9] and to date solution syntheses of SnSe nanostructures have required organic surfactants and/or solvents, for example, oleylamine, trioctylphosphine selenide, and bis[bis(trimethylsilyl)amino]tin(II), while only yielding milligram quantities of materials. [10]

In this study, we demonstrate a surfactant-free aqueous solution approach towards the preparation of >10 g of SnSe nanoplates, by boiling a mixture of NaHSe and Na2SnO2 solutions for 2 h. The phase-pure nanoplates can be hot-pressed into dense pellets with outstanding thermoelectric power factors (Scheme 1).

Transmission electron microscopy (TEM; Figure 1c) showed that the SnSe nanoplates were almost uniformly rectangular, and selected area electron diffraction (SAED) patterns obtained with the incident electron beam normal to the face of the nanoplate could be indexed along the [100] SnSe zone axis. A set of lattice spacings of ≈3.0 Å intersecting at an angle of 93(1)° could be measured from high-resolution TEM (HRTEM; Figure 1d), corresponding to the {011} plane spacings. Combined with the SAED data, the nanoplate face can thus be identified as the bc plane of SnSe, and the side facets are defined by {011} planes (Figures 1c, S4). The observed splitting of diffraction spots suggested twin defects induced by orthorhombic distortion. [12] Images and SAED patterns along the [001] zone axis (beam direction parallel to the nanoplate face; Figure 1e) verified that: i) the plates are approximately an order of magnitude thinner in the third dimension, and ii) the bc plane forms the nanoplate faces. Further, diffraction spots are elongated along [100], indicating planar defects along the a axis. [13] Lattice spacings of ≈5.7 Å (d(200)) and 4.2 Å (d(010)) were observed in the corresponding HRTEM image (Figure 1f).

Intermediate products synthesized after only 1 min of heating were investigated to understand the morphological evolution. The product is single-phase orthorhombic SnSe (Figure S5a), composed principally of irregular, near-rectangular plates. Combined with the SAED data (Figure S4), the facets of the SnSe truncated nanoplate can be depicted as shown in Figure 2a. Hence, the SnSe truncated nanoplate is enclosed by {100} and {011}, together with {001}, facets. Given that no surfactant is used, the nanoplate shape is determined primarily by the intrinsic features of the anisotropic selenide crystal structure. Atomic planes with high surface energies usually exhibit fast growth rates, and in SnSe the {001} and {010} planes possess much higher surface energies than the {011} planes. [14] The former planes would thus experience faster initial growth than the {011} planes. To maintain the minimum surface energy as growth progresses, the {001} and {010} planes diminish, while the {011} planes feature increasingly in the side facets (Figures 2a,c) until they dominate completely (Figures 1c, 2d).

The NaOH concentration is also important in regulating growth, and by decreasing the molar ratio from 15:1 to 15:2 the mean length/width of the SnSe nanoplates is reduced from ≈150 nm to ≈80 nm (Figures S6, S7). Decreasing the hydroxide concentration further has more profound effects on the reaction chemistry (see the Supporting Information). The ability to prepare >10 g of surfactant-free SnSe nanomaterials allowed the fabrication of high-density pellets through hot pressing without the necessity of high-temperature annealing.
Pellets of ≈95% theoretical density, retaining the orthorhombic SnSe structure, were obtained (denoted 1; Figure 3a; Tables S3, S4). Strong orientation of the plates in the bc plane is evidenced by the increased intensity of the (h00) PXD reflections, and the decrease in peak half-widths indicates a larger crystallite size after hot pressing. The indirect (direct) optical band gap from diffuse reflectance (DR) UV/Vis spectra [10c] narrows slightly from ≈0.89 (≈1.1) eV to ≈0.85 (≈1.0) eV (Figure S10) when the nanoplates are consolidated into dense pellets, which could be related to sintering effects. The values are very similar to the indirect band gaps reported for both single-crystalline and polycrystalline SnSe. [4,5c,d]

1 is composed of densely packed particles, typically ≈200 nm across with flat surfaces (Figures 3b, S11a). The Sn:Se ratio remains at 49(1):51(1) atom% (Figure S11b). An SAED pattern (Figure 3c), with the beam normal to the face of a nanoplate taken from 1, was indexed along the SnSe [100] zone axis. The single-crystal structure was confirmed by the HRTEM image (Figure 3d). TEM also showed that the nanoplate from 1 consisted of compacted smaller platelets (Figure S11c). Thermogravimetric analysis (TGA) of 1 under both argon and air revealed negligible weight changes below 500 °C, but suggested that thermal decomposition and oxidation, respectively, begin above this temperature (Figure S12).

For comparison, a second sample of SnSe nanoparticles (≈40-60 nm) was synthesized by a citric-acid-assisted solution synthesis and also consolidated into dense pellets (≈92% of the theoretical density) by hot pressing (denoted 2; Figures S13, S14). Compared to 1, 2 possesses the same orthorhombic structure and a similar optical band gap, and it forms comparable nanostructures (≈200 nm across, oriented in the bc plane). Importantly, however, Cl is detected in 2 (Sn:Se:Cl ratio of 51(1):48(1):1(1) atom%), which likely originates from the replacement of ligated citric acid by Cl during processing. [7b]

The similar densities and constituent particle sizes of 1 and 2 allowed a good comparison of their relative electrical performance. The electrical conductivity of 1 (Figure 4a) increases four-fold from ≈840 S m⁻¹ at 300 K to ≈3500 S m⁻¹ at 550 K. The magnitude of the values for 1 can be attributed to the high crystallinity, small band gap, surfactant-free particle surfaces, microstructural orientation, and the high level of sintering and densification achieved. By contrast, 2 exhibited electrical conductivity increasing from ≈55 S m⁻¹ at 300 K to only ≈250 S m⁻¹ at 550 K, more than an order of magnitude lower than 1.

The contrast in the variation of the Seebeck coefficient with temperature for 1 and 2 is striking (Figure 4b). S for 1 increases almost linearly with temperature (250 μV K⁻¹ at room temperature to 340 μV K⁻¹ at 550 K). By comparison, 2 shows n-type behavior at room temperature (S ≈ −150 μV K⁻¹), with the value of S becoming positive (p-type behavior) at ≈490 K and rising to ≈80 μV K⁻¹ at 550 K.
It is possible that the n-type conducting behavior correlates with the presence of Cl and/or a slight excess of Sn, as noted above. We are currently investigating this behavior further in systematic doping experiments. An n/p or p/n inversion with increasing temperature has also been observed in pellets consolidated from PbTe, Ag2Te, and PbTe0.1Se0.4S0.5 synthesized through surfactant-assisted solution methods, [7,15] and should be related to the thermal activation of higher concentrations of positive or negative charge carriers, respectively. [7b] It is also notable that both σ and S increase with temperature for 1. This phenomenon has been observed in both undoped and iodine-doped polycrystalline SnSe. [5c,d,16] Although the origins of the behavior of 1 require further investigation, the combination of superior σ values coupled with high values of S leads to exceptional power factors (≈0.05 mW m⁻¹ K⁻² at 300 K to ≈0.40 mW m⁻¹ K⁻² at 550 K; Figure 4c). In contrast, the power factors for 2 are much lower (0.001 mW m⁻¹ K⁻² at 300 K, reaching only 0.05 mW m⁻¹ K⁻² at 550 K). The huge differences in performance between 1 and 2 further emphasize the importance of the surfactant-free synthesis route, not just in the context of a simpler, more sustainable synthesis method, but also in consistently delivering significantly improved electrical properties (Figure S15). Notably, the power factors for 1 far exceed those for undoped polycrystalline SnSe across a similar temperature range (0.028-0.04 mW m⁻¹ K⁻²), [5c-e] and are comparable to those for hole-doped materials with high carrier concentrations. [5d,17] Recent Na- and Ag-doping studies have elegantly demonstrated how the electrical performance and ZT values of SnSe single crystals can be dramatically improved. [18] Given that the samples in our studies were non-optimized, strategies involving systematic hole doping, in conjunction with surfactant-free nanostructuring approaches, should yield even higher-performing p-type SnSe materials and pave the way for one-pot synthesis of p- and n-type SnSe nanomaterials.

In summary, a simple, quick, surfactant-free, and energy-efficient solution synthesis yielded SnSe nanoplates in gram quantities. The ensuing nanostructured pellets exhibited exceptional electrical conductivity coupled with high Seebeck coefficients, leading to power factors surpassing those of polycrystalline and surfactant-coated counterparts. The technique should be readily adaptable to include dopants and amenable to the discovery of further materials, both p- and n-type, with enhanced thermoelectric properties.

Experimental Section

Full experimental details are provided in the Supporting Information.

Materials Synthesis. 100 mmol NaOH and 10 mmol SnCl2·2H2O were added to 50 mL deionized water to yield a transparent Na2SnO2 solution. 50 mL of NaHSe (aq), prepared from Se and NaBH4, was injected into the boiling solution, leading to the immediate formation of a black precipitate. The mixture was boiled for 2 h and cooled to room temperature under Ar (g) on a Schlenk line. The products were washed with deionized water and ethanol and dried at 50 °C for 12 h. Scaled-up syntheses were performed with six-fold precursor concentrations (94(1)% yield). For the surfactant-assisted synthesis, 50 g citric acid was introduced into the SnCl2 solution with no addition of NaOH, and the reaction duration was increased to 24 h.
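As a quick arithmetic check (my own, not performed in the paper), the power factor S²σ recomputed from the 550 K transport values reported for sample 1 reproduces the quoted ≈0.40 mW m⁻¹ K⁻². The thermal conductivity needed for ZT is not reported in this excerpt, so the κ below is a placeholder assumption:

```python
# Consistency check on the reported transport values for sample 1.
S_550 = 340e-6       # Seebeck coefficient at 550 K [V/K]
SIGMA_550 = 3500.0   # electrical conductivity at 550 K [S/m]
pf = S_550 ** 2 * SIGMA_550                        # [W m^-1 K^-2]
print(f"power factor at 550 K: {pf * 1e3:.2f} mW m^-1 K^-2")  # -> 0.40

# Illustrative ZT only: kappa is NOT reported here, so this value is
# a placeholder assumption, not a measurement.
KAPPA = 1.0          # [W/(m*K)]
print(f"ZT at 550 K (assuming kappa = {KAPPA}): {pf * 550 / KAPPA:.2f}")
```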
Authenticity, Boundaries, and Hybridity: Translating "Migrant and Minority Literature" from Swedish into Finnish

This article analyzes the representation of linguistic variation in the Finnish translations of four Swedish coming-of-age stories depicting migrant or minority perspectives: Mikael Niemi's 2000 Popular Music from Vittula, Jonas Hassen Khemiri's 2003 Ett öga rött, Marjaneh Bakhtiari's 2005 Kalla det vad fan du vill, and Susanna Alakoski's 2006 Svinalängorna. Through an analysis of speech and thought representation techniques and focalization, the article explores the role played by literature and translation in the materialization of dialects and sociolects as bounded entities. The paper argues that linguistic and social hybridity, on which the reception of minority and migrant literatures often focuses, is accompanied by the reification of new varieties conceived as authentic expressions of migrant and minority experience. Literature and translation are active agents in such processes, which are largely based on cultural, discursive, and cognitive constraints that condition the interpretation of each

Introduction: Boundaries, hybridity, and authenticity

The story of growing up between two or more cultures has been a popular theme in Western narrative fiction for decades. Perhaps the best known coming-of-age story of this kind is Zadie Smith's White Teeth (2000), although predecessors have appeared since at least the early 1980s, for example in France (Mehdi Charef's Le Thé au harem d'Archi Ahmed, 1983) and in Sweden (Finland-born Antti Jalava's Asfaltblomman, 1980). Such themes are not new: for example, African-American authors such as Richard Wright (Black Boy, 1945), James Baldwin (Tell Me How Long the Train's Been Gone, 1968), and Toni Morrison (The Bluest Eye, 1970) have explored them. Recently, coming of age between two (or several) cultures has also become a popular motif in Scandinavian literature, especially in Sweden.

The linguistic, thematic, and narrative similarities between different coming-of-age stories focusing on migrant or ethnic minority experience suggest that these novels form a distinct genre. The most important linguistic feature of this genre is abundant variation in sociolect and register in the representation of the characters' speech and thought, as well as in the narrator's discourse. Thematically, this variation is often interpreted as an intrinsic characteristic and manifestation of hybrid identities: the characters of the novel, especially the protagonist, appear to navigate between two or more ethnic and social identities, in particular between a majority identity and a minority identity. The exploitation and perception of boundaries between different languages and language varieties, and their blurring, resulting in hybridity, therefore constitutes one of the essential features of this novelistic genre.
The perceived hybridity on the level of language and identity is linked to narrative techniques. Thus, first-person narratives in particular (the most common narrative framework employed within this genre) are often read as (pseudo-)autobiographies because the narrator and the protagonist are conflated (Genette 1972: 214, 236; 1982: 71), and because readers have a natural tendency to conflate the author with the narrator (Gavins 2007: 129). However, classical third-person narratives belonging to this genre may receive a similar reading. Therefore, it appears that an essential criterion for genre membership is that the author is familiar with the social and linguistic world depicted in the story. Otherwise, the representation of that universe would not be authentic: the author would appear to be unreliable (cf. Cohn 2000). Hence, authenticity emerges as another key concept when analyzing this genre.

This article examines authenticity, hybridity, and boundaries in four Swedish novels belonging to this genre and their Finnish translations. The emphasis is on the analysis of the relations between the protagonist, narrator, and author. First, I will provide a brief overview of the ways in which the translation of non-standard language in narrative fiction has been treated in previous research in translation studies and beyond. Second, I will analyze the representation of sociolinguistic variation in the Swedish source texts and in the Finnish translations of Mikael Niemi's Populärmusik från Vittula (2000, also published in English as Popular Music from Vittula), Jonas Hassen Khemiri's Ett öga rött (2003), Marjaneh Bakhtiari's Kalla det vad fan du vill (2005), and Susanna Alakoski's Svinalängorna (['Swine Projects' or 'Swine Rows'], 2006). This analysis, which focuses mainly on speech and thought representation, is necessarily a linguistic one. Third, I will examine the ways in which the protagonist, the narrator, and the author may be approached using the narratological concept of focalization or point of view. Since the concept of focalization is not linguistic per se (although focalization is of course created linguistically), this analysis will be more succinct than the linguistic analysis of speech and thought representation. To conclude my article, I will analyze whether the concepts of hybridity, boundaries, authenticity, and polyphony are useful in explaining both the emerging genre of multicultural coming-of-age stories in Sweden and some of the translation strategies that can be observed in Finnish translations of novels belonging to this genre. The goal of the article is not to criticize the translations or the translators, who demonstrate excellent analytical, stylistic, and creative ability.

The concepts of authenticity, hybridity, and boundaries link the analysis to contemporary sociolinguistics and linguistic anthropology (see e.g. Coupland 2014; Heller & Duchêne 2014; Heller 2014; Pietikäinen & Dlaske 2013), in which concepts familiar from variationist sociolinguistics, such as speech community and native speaker, as well as the epistemological foundations of linguistics and sociolinguistics, have been problematized and questioned. The concept of polyphony connects the analysis to text linguistics and the theory of argumentation and énonciation. Thus, I will discuss the inherent polyphony of utterances, texts, and language use in general: polyphony in the sense of multiple possible voices, sources or loci of discursive responsibility, and contexts (see Halliday 1978: 31; Ducrot 1980). This discussion will focus on the potential consequences of our ability or inability to interpret polyphony.
Among other things, the article aims at providing interfaces between linguistic approaches to variation on the one hand (such as those conceptualized within narratology and translation studies) and contemporary sociolinguistic and text-linguistic theory on the other. Indeed, a formalist linguistic or pragmatic analysis would not suffice to answer why this particular novelistic genre has emerged and how and why certain translation strategies related to this genre can be identified.

Translating non-standard language

By definition, spoken language cannot be reproduced in written form: we use letters, punctuation, and typographical devices to stylize our writing, but we cannot write sounds, intonations, or pauses, not to mention the situational, social, and intertextual context in which spoken utterances are produced. Consequently, narrative fiction can only index and evoke the characteristics of spoken language and constitutes a representation rather than a reproduction of spoken language. Means used to do this include word order, orthography, punctuation, and the narrative report of speech acts (see Leech & Short 1981: 323 for this term, which is not related to speech act theory and refers to passages in which the narrator tells what the characters said and how). In many languages, there is a long tradition of written representation of spoken language. Thus, when characters in a novel speak (and think) in a way that corresponds or is close to what is conceived as the norm of standard written language, their speech and thoughts appear to be reproduced faithfully, because we think that the norm of standard language is identical in writing and speech. But this is only an illusion of mimesis.

The issue is much more complex when the language use of a character or the narrator is not standard and appears to represent a distinct regional dialect, sociolect, ethnolect, idiolect, or even a foreign language, for there are few conventions for the written representation of such varieties. During the 20th century, the representation of sociolinguistic variation has become so common in Western literature that its presence has been normalized to a certain degree (Fludernik 1996: 71). Many terms have been proposed to theorize this variation: literary sociolects (Lane-Mercier 1997: 46-47), standard vs. non-standard literary dialect (Määttä 2004), and heterolingualism (as opposed to sociolinguistic variation and multilingualism in the "real world"; Grutman 2006: 18). Linguistic hybridity (reflecting multilingualism) in the novel has also been analyzed as pertaining to different categories of translation on the part of the character or the narrator, and divided into symbolic hybridity, in which language is only a medium, and iconic hybridity, in which language as medium and object are the same (Klinger 2015: 16-23).

As scholars such as McHale (1994), Lane-Mercier (1997: 46), Ramos Pinto (2009: 290), and Taivalkoski-Shilov (2006: 48) note, the representation of social and regional variation is based on stereotypes and assumptions about sociocultural and linguistic differences that people recognize by a minimal number of differentiating markers. This representation is always related to ideologies, i.e. sets of values, and especially language ideologies, i.e.
cultural conceptions of the nature, purpose, and function of language (Gal & Woolard 1995: 134; Woolard & Schieffelin 1994). In this sense, language use and its representation are never neutral: it is impossible to use language without simultaneously conveying attitudinal information (Fowler 1977: 76). All linguistic units and varieties can therefore be conceived as having not only a communicative function but also an indexical function: they index social phenomena such as group membership and identity. These indexicalities, i.e. hierarchically structured, stratified, and primarily local sociocultural dimensions of meaning, can be conceived as "projections of functions onto form" (Blommaert 2006: 164-165; Silverstein 1979). In other words, when sociolects, dialects, and registers are represented in literature, the linguistic forms of which they are composed are recognized as referring to a given sociocultural group and/or identity. However, this representation does not constitute a verbatim reproduction of actual speech. Thus, the representation of sociolinguistic variation in a novel does not reflect real language use; it refracts it, and this refraction is always ideological (Blommaert 2006: 173) because it is based on values and beliefs (and sometimes stereotypes) related to individuals and groups. This representation is sociocultural: although the basic meaning of the forms used in this representation may be relatively stable, they acquire different indexical meanings depending on their social and cultural context, including the specific contexts that are activated when a reader reads and interprets the text.

Consequently, translation is always a process of re-contextualization (House 2006), i.e. a transcription of a source text into a new context, because indexicalities and sociocultural dimensions of meaning differ from one language and culture to another, from one language variety and sub-culture to another, from one situation to another, from one reader to another, from one era to another. Naturally, some of these indexicalities and sociocultural meanings, which are part of the context, are shared. Otherwise, translation, and indeed communication, would be impossible. However, the contexts within which readers of a text operate can never be identical, and they are even more likely to be different if we compare readers who read the source text and those who read the target text: their initial contexts are different, and the contexts activated by the text cannot be exactly the same in different language versions. Each version has its own order of indexicality contingent upon the culture(s) related to that particular language, because each text, and each word and construction composing a text, has its own order of indexicality.
Hence, orders of indexicality related to the representation of sociolinguistic variation are part of the context that is transformed in translation. Indeed, for the literary translator, non-standard language constitutes an important challenge, and its translation is often characterized as an impossible task (see e.g. Folkart 1991: 343; Lane-Mercier 1997; Sánchez 1999; Ramos Pinto 2009: 291). However, non-standard language in the novel has to be translated. The concept of translation strategies is a useful tool for analyzing such translations. Translation strategies depend on time- and language-contingent factors such as translation norms and translation culture (see Ramos Pinto 2009 for a comprehensive survey of strategies identified in previous literature). A common strategy used in the translation of non-standard language use has been to look for an equivalent variety in the target language, but this strategy usually alters the narrative, social, and ideological constellation (see e.g. Berman 2000 [1985]: 286; Berthele 2000; Määttä 2004). In other words, while the goal of this strategy is to create a context that is similar to the source text, the new context in fact takes the target text even further from the original context than a more neutral strategy, effacing hybridity, would do. This is why other strategies, such as those based on the analysis of the function of the variation, have been proposed (Hatim & Mason 1997: 97-109). However, as will be shown later, the search for authentic equivalence is still common, both among literary critics and translators. Besides, the variety-to-variety approach seems to function quite well in cases in which it is assumed that the social stratification of the two cultures is similar enough, and when there is a solid tradition of literary representation of non-standard language in the target culture. Examples include translations of drama in Quebec and Scotland (Brisset 1990; Findlay 1996).

Thus, non-standard language in translation is also related to the cultural distance between the source text and the target text, i.e. the cultural fidelity of the translation, or the spectrum of integrating vs. alienating translations discussed by Schleiermacher (1963) as early as 1813. House (2006: 437) analyzes the same phenomenon in terms of overt and covert translations, and Venuti (1995, 2000) within the domestication-foreignization dichotomy. Venuti's terminology is widely used in translation studies today, and he has identified domestication rather than foreignization as a typical feature of translations of world literature into English. The domestication-foreignization dichotomy is inherently political: foreignization is a strategy through which minor literatures as well as linguistic and cultural heterogeneity can be acknowledged within the target culture. But such foreignization can also become a double-edged sword and a tool for a new, subtler exoticism, just as any other discursive strategy may (Buzelin 2006: 104; cf.
Arrojo 1994: 159-60; Lane-Mercier 1997: 64). Moreover, the situation is quite different when the direction of translation is towards a minor literature such as Finnish literature (Paloposki and Oittinen 1998): since the source text typically comes from a major culture, the acknowledgement and empowerment, or the exoticization, of the minor culture represented by the source text are not at stake. The texts analyzed in this article, however, constitute a special case, because the source texts represent not only a majority culture but also cultural heterogeneity and minority cultures. One could argue that a text never represents just one culture and its values. This is particularly true of the novel, for as Bakhtin (1986) has shown, one of its characteristics is the ability to represent different ideological viewpoints and stances.

The translation of a text that is linguistically and narratively hybrid can therefore provide interesting insights into the interpretation of the function and meaning of linguistic variation and the impact of translation strategies on the ideological structure of the novel. In fact, translations provide evidence of the text's take-up, i.e. reader-response (Mason 2014: 52), and have heuristic value (Buzelin 2006: 95). For example, translations inform us about the ways in which the translator has interpreted the function of linguistic variation. In the following sections, this heuristic value will be explored through an analysis of the Finnish translations of four Swedish novels. Section 3 provides a linguistic analysis of voices in the four novels. The analysis of the first novel requires more space because it introduces many of the concepts used throughout the analysis. Section 4 expands the analysis to the narratological concept of focalization. Discussion and links to contemporary sociolinguistic theory through the concepts of authenticity, boundaries, hybridity, and polyphony follow in subsequent sections.

Popular Music from Vittula

Mikael Niemi's Populärmusik från Vittula was published in 2000 in Sweden and received the prestigious August Prize the same year. Subsequently, the novel was translated into several languages, including Finnish (2001) and English (2003). The novel is particularly rich from a sociolinguistic point of view: language variation and diglossia are not only prevalent in the language use represented in the novel but also constitute some of its main themes. The story covers the main character's school years in Northeastern Sweden, on the western bank of the Torne River, which forms the border between Finland and Sweden in the North. The local population lives in a bilingual and diglossic situation: Swedish, on the one hand, and Torne River Valley Finnish, or Meänkieli ('our language'), on the other. The language of the dialogue is often specified in narrative reports of speech acts (in the sense in which Leech and Short use this term), such as "he said in Finnish" or "I said in Finnish to make sure he understood." The first-person narrator also reflects upon language use and the sense of estrangement and otherness he and his peers feel because they think that they speak neither Swedish nor Finnish correctly. Furthermore, the novel represents a process of language attrition, whereby the main characters gradually use Meänkieli less frequently as they grow up and eventually move to Southern Sweden in search of more secure employment.
Other languages are present as well: the main character's best friend, whose family is also Meänkieli-speaking, does not utter a single word, although he understands Finnish. However, at one point it turns out that he has miraculously learned Esperanto and speaks it fluently. There is also a German-speaking character, as well as a group of English-speaking relatives visiting from America. In addition, when the protagonist discovers popular music, English and the representation of English pronunciation by the local youths emerge in the text. Therefore, language is also one of the themes orienting the narration towards the magical realism that is typical of this book. Thus, while the minority language situation and the diglossic relation between Meänkieli and Swedish are seen as shameful, multilingualism is also a source of enrichment and a resource, providing the characters with secret wisdom that monolingual speakers and readers do not have. The way in which Meänkieli is present in the text exemplifies this esoteric dimension of language use particularly well.

In the source text, Meänkieli appears mostly in the direct speech of the characters, older secondary characters in particular. These utterances, expressions, and words are typically related to local culture. They are italicized and glossed verbatim in Swedish. In the instances in which Meänkieli appears, the rest of the dialogue is in Swedish. At least two hypotheses can be proposed regarding the communicative function of utterances in Meänkieli. If the communicative function of these utterances is to signal that the language of the diegesis is in fact Meänkieli rather than Swedish, the narrator functions as a translator, which would indicate narratorial control of the characters' speech and thoughts. But if the function is to signal that the language of the diegesis is a mixture of Swedish and Meänkieli, the narrator merely transcribes the characters' speech as it is and delegates control to the characters. Furthermore, the interpretation of the communicative function of this bilingualism depends on the languages the reader knows: a reader who does not know Finnish or Meänkieli does not necessarily know that the utterance in Meänkieli is a verbatim reproduction of the utterance in Swedish (or English, in the English translation). In contrast, a reader who knows Finnish or Meänkieli sees a repetition of the Swedish utterance in Meänkieli (if the reader regards Meänkieli as a discrete language) or in a dialect of Finnish (if the reader thinks that Meänkieli is a dialect of Finnish).

In the Finnish translation, most utterances in Swedish are translated into standard Finnish, whereas utterances in Meänkieli are reproduced verbatim. As a result, bilingualism between two mutually unintelligible varieties (Meänkieli and Swedish) becomes bilingualism between two (mostly) mutually intelligible varieties (Meänkieli and Finnish). If the Finnish- or Meänkieli-speaking reader thinks that the presence of Meänkieli indicates that the diegesis in fact happens in Meänkieli, the narrator appears to be translating the speech of the characters while controlling that speech. But if the reader interprets the passage as an instance of mixing the two codes, the narrator appears to be delegating some of that control to the characters. The interpretation therefore has an influence on the distance perceived between the narrating "I" and the characters, the experiencing "I" in particular.
Bilingualism between two mutually unintelligible varieties and the estrangement created by the foreign language are therefore inevitably lost in the Finnish translation. But the translation attempts to remedy this loss by extending non-standard usage even to places where the source text is standard. The following passage, which depicts the particular beliefs of the Laestadian revivalist movement in free direct speech, shows that the translator seems to have opted for the interpretation according to which utterances in Meänkieli indicate that the language of the diegesis is in fact Meänkieli rather than Swedish:

1a) Vid jordfästningen ropar predikanterna at du dog i den levande tron. Saken är klar. Du dog i den levande tron, sie kuolit elävässä uskossa. Du kom till Muodoslompolo, vi har alla bevittnat det, och nu sitter du äntligen på Herren Gud Faders gyllene pakethållare i den eviga, änglatrumpetande nedförsbacken. (Niemi 2000: 23)

1b) At the funeral the preacher bellows on about how you died in the living faith. No doubt about it. You died in the living faith, sie kuolit elävässä uskossa. You got to Muodoslompolo, we all witnessed it, and now at long last you are sitting on God the Father's golden luggage carrier, freewheeling down the eternal slope accompanied by fanfares of angels. (Niemi 2003: 23)

The italics marking non-standard lexical items in the Finnish text are mine:

1c) Hautajaisissa saarnaajat hehkuttavat, että sinä kuolit elävässä uskossa. Asia on täysin selvä. Sie kuolit elävässä uskossa. Sinä pääsit Muodoslompoloon, me kaikki olemme todistaneet sen, ja nyt istut viimeinkin Herran, sinun Jumalasi kultaisella pyöränhollarilla ja huilaat enkeltrumpettien pauhussa ikuista alamäkeä. (Niemi 2001: 25)

But the foreign element that appears to be a translation also functions as a door to the inner circles of the diegesis, marking a change in the narratorial control of reported speech. The beginning of this passage is attributed to the narrating "I" (possibly followed by interior monologue of the experiencing "I" in "No doubt about it") and continues in free direct speech attributed to an unidentified revivalist preacher. The first non-standard word in the translation is the subject pronoun sie 'you' (in standard Finnish sinä), which is a strong marker of most Northern and Eastern dialects of Finnish and also appears in the same form in the source text. But two subsequent nouns (pyöränhollarilla, enkeltrumpettien) and one verb (huilaat) are also non-standard in the translation. The first noun can be associated with Northwestern dialects because of the Swedish loanword in the second part of the compound. For the second noun, the association with a particular dialect or any non-standard variety is not clear; in fact, the omission of the final -i of the first part of the compound links the word to a famous Christmas carol and religious contexts on the one hand, and to a type of flower on the other. As for the verb, it represents regionally unmarked non-standard Finnish, although the transitive usage is rather idiosyncratic. Both forms could also be regarded as somewhat archaic.
While the source text is hybrid because it is bilingual, the Finnish translation is therefore a hybrid because it is a mixture of standard Finnish, unmarked non-standard, non-standard marked as a representation of Northwestern dialects, and non-standard marked in a way that cannot be identified in a precise manner. As suggested above, an attempt to remedy the loss of the representation of multilingualism is a plausible explanation for the increased non-standard features of the Finnish text. But the fact that the preacher's free direct speech is rendered somewhat ironic by the metaphors of the source text (luggage carrier; angel trumpets used idiosyncratically as a deverbal adverb "angel-trumpeting") may also have influenced the increased dialect representation in this passage. In any case, intensified non-standard marking in this passage increases the distance between the narrating "I" and the unidentified secondary character.

Accentuated non-standard representation in the translation is particularly visible in passages in which the characters' utterances are rendered in direct speech. In these instances as well, although there are scattered utterances in Meänkieli glossed in Swedish in the source text, the representation of non-standard speech, in this case clearly Meänkieli or Northwestern Finnish dialect, spreads to other utterances of the same dialogue. This increased non-standard representation concerns even utterances that are standard (yet marked as representations of colloquial usage by means such as elliptical sentences) in the source text. Typically, one line of a secondary character's utterance in Meänkieli triggers dialect representation extending to the rest of this character's and also other characters' speech in a given passage of the translation. For example, in the following passage, the main character is about to be molested by an older man (who has just magically turned into an older woman). The utterance in Meänkieli is italicized in the Swedish and the English text; the italics of the Finnish text are mine:

In summary, bilingualism (and diglossia) between two languages in the source text becomes bilingualism between two varieties, standard and dialect, in the Finnish translation, and dialect representation spreads towards utterances that were not marked as foreign in the source text. This strategy appears to constitute an attempt to remedy the loss of bilingualism and alterity resulting from the fact that Finnish readers understand Meänkieli and interpret it as a Finnish dialect rather than as another language. In the target text, the increased morphological and lexical representation of dialect therefore favors the interpretation that events and experiences in the diegesis are thought and spoken in Meänkieli, i.e. non-standard. In fact, no translation strategy would have reproduced the sociolinguistic constellation of the source text faithfully. While this accentuated dialect representation also concerns utterances produced by the protagonist, it increases not only the distance between the narrator and the secondary characters, but also the distance between the narrating and the experiencing "I".
Ett öga rött

Jonas Hassen Khemiri's Ett öga rött was published in 2003, and the Finnish translation came out the following year. The novel is written in the form of a diary exposing the thoughts and deeds of a ninth-grader born in Stockholm of Moroccan parents and living with his father. The story covers one semester of the protagonist's life, although there are also flashbacks to previous events, especially events related to his late mother. The novel's universe is immersed in sociolinguistic variation: the character's father speaks Arabic with his friends and alternates Arabic and Swedish when communicating with his son. Occasionally, there are entire utterances, single words, or short expressions in Arabic. On several occasions, the narrator specifies whether a given utterance was originally in Arabic or in Swedish. Furthermore, there are reflections about the diglossic quality of the Arabic language, as becomes apparent in the coexistence of different accents and the difference between classical written Arabic and spoken varieties. There are also many comments concerning the diglossia between standard Swedish (depicted by wordings such as "nerdy Sven language" or "Sven tone") and the colloquial variety used by the main character. On one occasion, the protagonist's father sees his diary and makes furious comments about the bad quality of his Swedish, wondering why the son has started to use such poor language although just a few years earlier "his Swedish was perfect."

In this novel, non-standard language is not limited to occasional instances of speech and thought representation: it is present throughout the book and in the narrator's language use in particular. The most salient feature is word order: in subordinate clauses and following an adverbial at the beginning of a clause, the narrator uses the S-V constituent order instead of the reversed order, V-S, which would be the norm for standard Swedish in these cases. Other features are lexical, such as English words, slang words, and expressions borrowed from languages of immigration such as walla, 'I swear (to God)'. There are also expressions calqued on other languages (e.g. jag lovar, 'I swear', calqued on walla). Many of the non-standard lexical items are swearwords (knulla, 'fuck'), words that have become emblematic of the so-called "immigrant Swedish" such as guss ('girl', borrowed from Turkish kız; see Milani 2010: 127), and/or derogatory terms related to sexual minorities and ethnic groups. However, gays, Swedes, and Jews are not the only groups addressed by derogatory terms: there are also disrespectful comments on (dark-skinned and/or dark-haired) foreigners, such as svartskalle and blatte.
In the Finnish translation, lexical and typographic means are used to render the non-standard flavor of the source text. Lexical means include Finnish swearwords, vulgar terms related to sexual activities, and slang words depicting persons and groups of people. The translation also uses non-normative punctuation, omitting most commas. But there are attempts to reproduce the word order of the source text in the translation, as shown in the following example (my italics in all language versions):

3a) Nästan han hade ringat in mig i hörnet (och skulle ångrat sig länge) om inte Alex kommit till hans räddning. (Khemiri 2003: 40)

Almost had he cornered me (and he would have had to regret it for a long time) if Alex did not come over to help him.

3b) Melkein hän oli saanut minut ajetuksi nurkkaan (ja olisi joutunut katumaan pitkään) kun Alex saapui pelastamaan. (Khemiri 2004: 40)

Finnish and Swedish are structurally quite different languages, and their rules governing word order are not the same. In Finnish, this word order is not ungrammatical, although it is unusual. At the same time, it is not linked to any particular sociolinguistic or regional variety. On other occasions, this strategy of translating the word order creates a specific meaning. For example, placing the adverb vähän 'a little' at the beginning of the utterance usually means 'a lot' in colloquial Finnish:

The translation of absolute superlatives accentuates the impression that there is, in addition to the narrator-as-a-character using non-standard language, another narratorial instance using a more standard version of Swedish, and the two become entangled. Both in the Swedish and the Finnish text, the absolute superlative is accompanied by non-standard lexical features. Absolute superlatives are not indexed as colloquial in Finnish. Therefore, the voice of the narrating "I" becomes more salient in the translation, because in Finnish the absolute superlative pertains to literary registers of language use:

5a) Eftersom jag vet hur dom tänker jag använde töntigaste svennetonen. (Khemiri 2003: 165)

Since I know how they think, I spoke using the nerdiest Sven tone.

5b) Sen takia että tiedän mitä he ajattelevat puhuin mitä nössöimmällä sveduäänellä. (Khemiri 2004: 166)

In this translation as well, a faithful rendering of the sociolinguistic dimension of the source text would have been impossible. Here, the translation strategy consists of rendering non-standard forms and constructions in the source text with non-standard forms and constructions in the same places in the target text. But since the indexical dimension of these elements is inevitably different in the source and the target text, the strategy triggers changes affecting the narrator. Thus, the image of the narrator-character is slightly different in the translation, and the distance between the narrating "I" and the experiencing "I" appears to increase. In fact, the source text emphasizes written representations of colloquial language and reads as a sort of transcript of utterances that can be imagined as spoken or thought. But the translation reads more as a traditional first-person narrative, a representation of idiosyncratic writing or thought in which standard language is combined with slang words, incorrect punctuation, and an unusual mixture of registers.
Kalla det vad fan du vill

Marjaneh Bakhtiari's Kalla det vad fan du vill was published in 2005 in Sweden and translated into Finnish in 2007 (see also Liisa Tiittula's and Pirkko Nuolijärvi's article in this special issue). The main character, Bahar, is 9 years old when her family moves from Iran to Malmö in Southern Sweden. The story ends when she is 24 years old. While the narrator uses the third person rather than the first person, which is the default narrative mode for a coming-of-age story, the theme and the temporal convergence between the diegesis and the time of the narration at the end of the novel (Genette 1983: 233) mark the novel clearly as a Bildungsroman.

Linguistic variation occupies a central position in this text. Standard Swedish, Swedish youth slang, old Scanian (Skåne) dialect and modern Scanian accent, Swedish spoken by first-generation migrants, English spoken by a Swede, Jamaican English, Spanish, and Farsi are represented in the direct speech of the characters. In addition, there are metalinguistic comments throughout the novel: in conversations between the characters (adult migrants talking with their children, adults trying to learn Swedish among themselves) and in mixed forms of speech and thought representation. Language is also a typical topic of narrative reports of speech acts (in Leech's and Short's understanding of the notion).

Generally, standard Swedish is used in the narration, in the speech representation of some of the secondary characters, as well as in dialogues in which Farsi (and occasionally Arabic) is used in the diegesis. Occasionally, words or short phrases in Farsi appear in dialogue, and idiosyncratic syntactic features sometimes suggest a different speech style related to Farsi. Spelling evoking a non-standard pronunciation and eye dialect (i.e. spelling indexing non-standard language without indicating any specific non-standard pronunciation, e.g. dont vs. don't) are used to represent accent, regional dialect, and Swedish spoken by adult migrants. Youth slang characterized by English words and expressions, often vulgarisms used as ritual insults, is a typical speech pattern of the protagonist's brother and sometimes also the protagonist.
A mixed strategy can be observed in the translation: some utterances are translated using the dialect-for-dialect approach at varying degrees of intensity, others by adjusting the spelling and eye dialect which evokes non-standard pronunciation and Finnish stereotypes of different accents. Morphosyntactic means are also used. Thus, "broken Swedish" is rendered by features typically associated with "broken Finnish," such as the inability to distinguish between short and long sounds and a simplified system of case endings and verb conjugation. Some of these means are exemplified in the following passage, in which Bahar's parents are attending a parent-teacher conference at Bahar's school. The teacher has just explained that Bahar is a good student and that there have not been problems with any teachers or students. Bahar's father has not understood a word, whereas her mother has only identified the words Bahar, teacher, and problem:

In Swedish, the mother's speech is characterized by the inability to pronounce certain consonant clusters (pr in problem and sn in snäll [esnell in the mother's speech]), sentences without a verb, and altered vowel quality (micke instead of mycket and esnell instead of snäll). The omission of the final -t in micke appears to indicate eye dialect. The translation uses partly similar means (verb omission in Khyvin kilti). The spelling of ong-gelmia 'problems' instead of ongelmia suggests a pronunciation in which the nasal velar sound is followed by a plosive, and the spelling of khyvin 'very' instead of hyvin evokes a velar instead of glottal pronunciation of the fricative h sound. In addition, the gemination in kiltti ('nice') becomes a short consonant (kilti). This passage is also an example of the way in which standard and non-standard speech can be alternated in fictional dialogues to an amusing effect.
The parody of politically correct and diversity-loving liberals is a key theme in the novel. Thus, Pernilla, the mother of Bahar's boyfriend Markus, is characterized as indulging in books written in "broken Swedish," "new Swedish," "immigrant Swedish," and different varieties of "suburban Swedish." In the following excerpt, she is using the word gus, an "immigrant Swedish" word she has learned from a book. (The word can be spelled either guss or gus.) At the same time, this passage is one of the many examples of the way in which Swedish people constantly mispronounce the main character's name (Baha instead of Bahar):

Mixed categories of speech and thought representation, such as free indirect discourse, are usually marked by typographical devices such as italics. Other instances of free indirect discourse, which are under the narrator's control, appear to have an ironic purpose, as exemplified in this passage, which is related to the previous example and describes Pernilla's leisure activities as chilling out. In these instances italics are not used:

8a) När Pernilla inte chillade med några sköna böcker från ghettot slogs hon för bättre cykelbanor i stan. (Bakhtiari 2005: 168)

When Pernilla was not chilling out in the company of nice ghetto novels, she was fighting for better bicycle paths in the city.

8b) Ja silloin kun Pernilla ei chillannut kivojen gettokirjojen seurassa, hän taisteli parempien pyöräteiden puolesta. (Bakhtiari 2011: 246)

While some of the humor and the irony may disappear in this translation, there are no major shifts affecting the relations between the narrator and the various characters. This could be related to the fact that the wide array of different varieties is such a salient feature of this novel. In addition, boundaries between different varieties and their connections to the characters and the narrator are exceptionally clear. Therefore, the text appears to be not only heterolingual but also essentially polyphonic in the traditional sense of the term (Bakhtin 1986), i.e. presenting multiple voices and ideological viewpoints alongside each other.

Svinalängorna

Susanna Alakoski's Svinalängorna covers ten years in the life of the main character, whose family has moved from Finland to Sweden and lives in a public housing project that the locals call Swine Projects because of the social problems concentrated there. The novel was published in 2006 in Sweden and won the prestigious August Prize the same year. The Finnish translation, Sikalat, came out the following year. The novel's diegesis is bilingual: the protagonist's parents mostly speak Finnish at home and with their Finnish-speaking friends. They also speak Finnish with their children, although gradually the children start using Swedish. The mother's and especially the father's Swedish pronunciation are occasionally marked by spelling that mimics their phonetic properties. Finnish is more present in the first part of the novel and disappears as the protagonist grows up and the parents gradually become alcoholics.
The retrospective first-person narrator's voice is often mixed with the voices of other characters, the mother in particular, through techniques such as free direct discourse. Finnish words and utterances are mostly swearwords used as interjections and insults appearing in the adults' speech, often in scenes in which the parents are arguing and drinking. The following example is extracted from such a scene, which takes place on Christmas Eve. The father has been drinking for several days and is now behaving violently. I have italicized the Finnish utterances in the Swedish text and the corresponding utterances in the translations.

9a) Pappa svor och vrålade ute på gården. Vad skulle hända nu? En stor sten krossade vår mittruta, vädringsfönstret. Blomkrukan for ner på golvet, glassplittret flög ut i rummet. Sakari skrek till, jag började panikgråta. Markku sa inte ett ord men han hade jättestora ögon. Voi hevon vittun (sic) vittu, sa pappa. Mamma stängde av teven. Voi saatanan saatana. Sedan småsprang mamma till hallen och låste upp dörren. Pappa kom in med knutna nävar. Fan ta den som sa något dumt nu. (Alakoski 2006: 103-104)

Daddy swore and was shouting on in the courtyard. What would happen now? A big stone hit our middle window, the ventilation window. The flowerpot fell onto the floor, debris of broken glass flew into the room. Sakari screamed, I started to cry in panic. Markku did not say a word but his eyes were wide open. Voi hevon vitun vittu, Dad said. Mom turned off the TV. Voi saatanan saatana. Then mom rushed into the hallway to open the door. Dad came in with fisted hands. God help the one who said something stupid now.

9b) Isä kirosi ja karjui pihalla. Mitä nyt tapahtuisi? Iso kivi rikkoi keskimmäisen ikkunan, tuuletusikkunan. Kukkaruukku putosi lattialle, lasinsirua sinkoili huoneeseen. Sakari parkaisi, minä aloin hädissäni itkeä. Markku ei sanonut sanaakaan mutta hänen silmät olivat pyöreät. Voi hevon vitun vittu, isä sanoi. Äiti sulki telkkarin. Voi saatanan saatana. Sitten äiti kipitti eteiseen avaamaan oven. Isä tuli sisälle kädet nyrkissä. Auta armias sitä joka sanoi jotakin tyhmää. (Alakoski 2007: 113)

Finnish words are not glossed in the source text because they are quite similar in Swedish and Finnish (e.g. Finnish vittu, Swedish fitta, 'cunt', and Finnish saatana, Swedish satan, 'Satan', both used as interjections). Besides, these words and the interjection voi 'oh' have previously appeared several times in the adult Finnish-speaking characters' speech, starting from the first paragraph of the novel. This passage also exemplifies the shifts affecting categories of speech and thought representation in the translation. Thus, the fact that the father speaks Finnish suffices to distinguish these voices in the source text. In the Finnish text, this difference disappears, which is probably the reason why the father's free direct speech has been transformed into direct speech. In passages preceding this one, free direct speech is attributed mostly to the mother: if free direct speech were maintained, some of this profanity would potentially be attributed to the mother.
Profanity and short sentences, often consisting of one single clause in the speech of the characters, give the text a colloquial flavor. This may have motivated enhanced colloquial marking of the narrator in the translation (for example the omission of the possessive suffix -nsä in (hänen) silmät ['his eyes'] in the previous example). But structural differences between Finnish and Swedish also play a role. Indeed, in addition to non-normative punctuation, the translation systematically uses the passive form for all first-person plural forms of verbs, which is the norm in most varieties of colloquial Finnish. Using this device to render the text less standard is a relatively neutral choice, for it is not linked to any particular dialect or sociolect. I have italicized the first-person plural forms in the following example, which also shows how the translation combines colloquial passive verbal forms with lexical (ääreen) and morphological (possessive suffix -mme in viereemme) features typical of literary, written language:

10a) Vi tog varsin sovsäck och satte oss på dem vid eldstaden. Vi ställde väskorna med choklad, godis, smörgåsar, cigaretter och tårta bredvid oss. (Alakoski 2006: 258)

We took each our sleeping bags and sat beside the campfire. We placed our bags, in which we had chocolate, candy, sandwiches, cigarettes, and cake, next to us.

10b) Me otettiin makuupussit ja mentiin nuotion ääreen istumaan. Me pantiin viereemme laukut, joissa suklaa, karkit, voileivät, tupakat ja täytekakku olivat. (Alakoski 2007: 280)

The narrator uses first-person plural forms frequently. They are rare in dialogue. Thus, while the first-person narration and the dialogue (with the exception of the vulgarisms mentioned above) are not morphosyntactically marked as colloquial in the source text, the narration is less standard than the dialogue in the translation. This outcome is accentuated by the fact that, in Finnish, the difference between formal and casual registers is largely morphological. Therefore, the distance between the two instances of "I", the narrating "I" and the experiencing "I", appears to be less marked in the translation.

From voices to focalization

Heterolingualism seems to be the norm for "migrant," "minor," and "minority" literatures such as the French "Beur novel" (Hargreaves 1990) and "Black English writing" (Buzelin 2006). But each constellation of heterolingualism is unique. In the previous section, I analyzed the ways in which language variation and its translation affect the relations between the characters and the narrator in the four novels under scrutiny. The unique nature of each novel explains the somewhat contradictory results of the analysis. In this section, I will extend the analysis of voices towards focalization or point of view. Although identifying the instances to whom speech and thought in the novel can be attributed (voices) differs from identifying the instances who see (focalization or point of view), the two are linked.

For literary translators, one of the most challenging aspects of their work is to translate the "feel" of the novel. According to Simpson (1993: 7), that feel is created essentially through point of view, i.e. focalization (a term commonly used since the publication of Genette's Figures III in 1972). Most narratologists today operate within a two-ended spectrum of focalization: internal and external (Fleischman 1990: 219), although more complex categorizations have also been presented (e.g.
Simpson 1993). In external focalization, the only information that is available is related to the immediate spectacle of the scene, and no information regarding the thoughts of any of the characters is given. In internal focalization, which is a typical feature of the modern psychological novel, the narrator knows as much as the character and reveals only things that the focalized character knows or perceives (Genette 1972, 1983). Genette's (1972: 206-211) concept of focalization is based on the difference between narrative mode (who is the personage whose point of view orients the narration, who sees?) and voice (who is the narrator, who speaks?). The two are entangled: even though point of view itself is nonverbal, it must be conveyed through linguistic means (Fleischman 1990: 216). For example, Rimmon-Kenan (1983: 72-73) notes that events may be reported from the point of view of the child in a first-person narrative, but the vocabulary may reveal that the narrator is an adult. Similarly, Fleischman (1990: 219-235) observes that if the temporal and psychological distance between the narrating "I" and the experiencing "I" is minimal (which is the case in Camus' The Stranger), or if the perception through which the story is rendered is that of the narrating "I" rather than the experiencing "I" (which is the case of marked focalization in Proust's In Search of Lost Time), focalization can be external even in first-person narratives. According to Fleischman, tense-aspectual features are important means in creating such marked focalization in first-person narratives. Hence, while first-person narratives mimic confessions and (pseudo-)autobiographies (Fleischman 1990: 234; Fludernik 1996: 90), this does not automatically imply internal focalization. Genette (1972: 194, 209-210, 214, 236 and 1983: 71) actually argues that first-person narratives are naturally inclined towards external focalization, whereas third-person narratives are predisposed to internal focalization. This is because third-person narrators have a natural tendency to display discretion and respect towards their characters. In first-person narratives, conversely, the narrator has no duty of discretion towards him or herself: the only duty of respect concerns his or her current information as a narrator rather than past information as the protagonist. Consequently, although the narrator and the hero are identical in first-person narratives, pure internal focalization can only be found in interior monologue. In third-person narratives, free indirect discourse is the tool par excellence through which internal focalization is expressed. Indeed, interior monologue and free indirect discourse are functionally analogous (Fleischman 1990: 234).
As Klinger (2015) has shown, linguistic hybridity and the relation between standard and non-standard usage are important components in the co-construction of focalization. Thus, the focalization shifts that take place in the translations of the four novels analyzed in this article can be explained by the continuum from standard to non-standard language. This continuum is interwoven with other continua: the continuum between spoken and written language and the spectrum ranging from non-marked variation to variation that is strongly marked regionally and/or socially. In Niemi's Popular Music from Vittula and Alakoski's Svinalängorna, the fact that the foreign language of the source text is the language of the target text renders the translation process more complex, while at the same time increasing the risk of the indexical relation of Otherness becoming that of Sameness in the translation (cf. Grutman 2006: 22). For example, in Niemi's novel, the foreign language of the source text corresponds to a variety identified as a regional dialect of Finnish in the translation. This dialect has quite an extensive history of literary representation in the works of writers such as Timo K. Mukka and Rosa Liksom. In the translation, the representation of dialectal usage spreads towards utterances that are not marked as foreign in the source text. The same phenomenon occurs in the translation of Alakoski's novel, but in this case the representation of non-standard speech affects the narration rather than the direct speech of the characters. The narrator's language use is only slightly colloquial in the source text. But the abundant use of mixed forms of speech and thought representation, free direct discourse and interior monologue in particular (cf. example 9), often consisting of vulgarisms, increases the colloquial flavor of the narration. This may have motivated the translation's more pronounced representation of colloquial language in the narration. Another reason may reside within the narrative structure of the novel. The final temporal convergence is projected into the past, as if the narrator were a teenager. In the translation of Niemi's novel, the accentuated dialect representation of the main character's speech therefore increases the distance between the narrating "I" and the experiencing "I", which appears to suggest a (very) minor shift towards internal focalization. In the translation of Alakoski's novel, however, this distance decreases, and the narrating "I" and the experiencing "I" seem to converge, which indicates increased internal focalization on the main character and decreased internal focalization on other characters.
Among the novels analyzed here, the distance between the narrator and the protagonist is most pronounced in Bakhtiari's novel Kalla det vad fan du vill, in which the third-person narrator uses mostly standard language, whereas the speech of most characters is marked as non-standard in varying degrees. Clear boundaries between different categories of speech and thought representation probably explain why there are no significant shifts affecting focalization in the translation. Although the presence of different languages and varieties is particularly strong in Bakhtiari's novel, these are clearly marked both in the source text and the translation: almost every character's speech is non-standard. As a result, both in the original and in translation, Bakhtiari's novel reads as a highly polyphonic text in the Bakhtinian understanding of the notion. A translation strategy dismissing this plurality of voices would have completely altered the novel's narrative framework. In Khemiri's novel Ett öga rött, the distance between the narrator and the protagonist is minimal, and the narration occasionally oscillates towards interior monologue. In the translation of Khemiri's novel, in contrast, the distance between the two instances of "I" becomes more accentuated because the syntax of the translation is standard (although at times idiosyncratic): mostly lexical and very few other means are used to render the non-standard quality of the narration. While the focalization of the source text is mostly internal, with the exception of a few instances in which the diary writer refers to himself in the third person, the translation oscillates between external and internal focalization.

These findings are consistent with previous investigations of the translation of speech and thought representation techniques and point of view: focalization or point of view and the distribution of speech and thought representation techniques are often altered in translation, especially when mixed types of discourse are present. Structural differences between languages may explain such shifts (see e.g. Gallagher 2001; Rouhiainen 2001; Taivalkoski-Shilov 2006; Kuusi 2006; Bosseaux 2007: 60-61). Translation universals, such as explicitation, simplification, normalization/conservatism, leveling out, source-language interference, untypical collocations, and underrepresentation of unique target-language elements (see e.g. Baker 1996; Tirkkonen-Condit 2004; Mauranen 2006) have also been presented as potential explanations. However, scholars have criticized translation universals for failing to take into account the contingency of translation norms (Paloposki 2002). Thus, literary and translational norms have been proposed as other possible explanations for shifts affecting focalization and speech and thought representation in translations (Toury 1980: 116; Taivalkoski-Shilov 2006). Indeed, since the shifts identified in the translations of the four novels are not systematic, it is necessary to continue the analysis of these shifts within a larger framework of the social context of translation. This will be the topic of the next section.
Authenticity and boundaries

In their extensive overview of colloquial language in Finnish literature, Tiittula and Nuolijärvi (2014: 143, 233) list three tendencies in contemporary Finnish literature (see also Tiittula's and Nuolijärvi's article in this special issue): the normalization of the representation of spoken language in both character and narrator discourse; increased mixing of different registers and increased presence of standard language in the characters' speech; and increased representation of spoken language in general, with the representation of dialect becoming more "authentic." Colloquial language and slang are much more prevalent in youth literature, and an entire novel, narrated by a young narrator, may be written in a colloquial style that cannot be linked to a particular regional dialect.

Interviews with five editors of translated fiction revealed that their attitudes towards colloquial and dialectal language in literature varied from total tolerance to strong reluctance (ibid. 255). Quoting interviews with the translators and pieces written by them in professional publications, Tiittula and Nuolijärvi also provide information about the choices made by the translators of the four novels analyzed in this article. Thus, after discussion with the editor, the translator of Popular Music from Vittula decided to add dialectal and other colloquial features such as repetitions to the translation, because otherwise the translation would not have had the same effect as the source text (ibid. 364). As for Alakoski's novel, they note that while the source text is bilingual, the translation became monolingual (ibid. 369-371). According to the translator of Khemiri's novel (ibid. 377), using mostly standard Finnish and only lexical means to render the non-standard quality of language was an inevitable choice because, according to the translator, there is no "equivalent immigrant slang" in Finnish. The translator of Kalla det vad fan du vill said that she was cautious with dialects in the translation, for readers would have found a faithful translation "too overwhelming." However, she tried to familiarize herself with "immigrant Finnish" by watching television shows in which there were migrants and making lists of the typical features of their speech and "grammatical errors" (ibid. 373-376).
While there are studies discussing the possible existence of new multiethnic youth varieties of Finnish, for example in the Eastern suburbs of Helsinki (Lehtonen 2011), "immigrant Finnish dialects" have not appeared in Finnish literature, which has been a disappointment to some literary critics. At the same time, there is a long tradition of literary representation of other varieties in Finnish literature. Thus, a tradition of literary representation of Northwestern dialects of Finnish spoken on both banks of the Torne River Valley, and different traditions of the representation of other dialects and sociolects, including slang, were available to the translators of Niemi's and Alakoski's novels. Both novels also depict the loss of the mother tongue. Language attrition, language shift, and broken linguistic identity were also key themes in Antti Jalava's 1980 Asfaltblomman, the first major "migrant novel" in Sweden. It played an important role in introducing the social and linguistic reality of Finnish immigration to Sweden in the 1960s and 1970s to general discussion, including themes such as language and identity, forced assimilation by the school system, and discrimination based on ethnic origin. Much of this discussion revolved around the loaded term of semilingualism, i.e. the alleged lack and loss of native language among migrant children. Indeed, Jalava's novel provides several accounts of semilingualism among Finnish migrants in Sweden, in reported speech, mixed forms of speech and thought representation, and narratorial discourse. Today, this discussion is over, as is mass migration of Finns to Sweden. Consequently, while Alakoski's novel is, among other things, a novel about Finnish immigration to Sweden, critics have preferred to stress its role as a portrait of childhood destroyed by alcoholism, domestic violence, poverty, and shame. Khemiri's and Bakhtiari's novels, on the other hand, have been read as novels about immigration and linguistic identity. Rather than portraits of the loss of a language, these novels read as celebrations of heterolingualism and the linguistic creativity resulting from language contact. When Khemiri's novel was published, some criticized the blatant deviations from the norms of written language. Most literary critics, however, welcomed the novel with open arms. The author himself noted that he could not have written the story of Halim, the main character, without using "his language," and linked the debate about the novel's language to the changing faces of Swedish identity and the issue of authenticity (Gröndahl 2007: 27; Bengtsson 2008: 3, 19). And while the author himself has argued that Halim's language is a literary construct and an idiolect rather than a discrete sociolect (af Kleen 2006), the novel was widely interpreted as a social documentary and authentic testimony and became the representative par excellence of the "Swedish immigrant novel" written in "immigrant Swedish." The sociolect identified in Khemiri's novel has been called, among others, förortssvenska, 'suburb Swedish' (the word suburb referring to areas with a high concentration of social housing units), Rinkebysvenska (from the name of a suburb in Stockholm that has become a paragon of districts with a high concentration of migrants and various linked social phenomena), kebabsvenska (referring to the fact that kebab joints are typically run by Middle Eastern migrants), miljonsvenska (referring to the 1960s project of building 1 million new dwellings and the fact that many of
these high-rise concrete buildings have been populated by migrants), and blattesvenska (referring to a derogatory term for 'migrant'). These alleged varieties have also been associated with Bakhtiari's novel. Thus, in Swedish literary culture, the emergence of the "migrant novel" as a new literary genre has been widely acknowledged, and many critics have linked it to identity politics and immigration policy in general, including debates about the changing notions of culture, language, and ethnicity (Gröndahl 2007: 21). However, Gröndahl (ibid. 27) also argues that while Niemi proudly presents himself as a representative of the Torne River Valley, authors like Khemiri do not want to appear as representatives of migrants. Indeed, as Kongslien (2013: 126) notes, writers such as Greek-born Theodor Kallifatides, who has published over 30 books in Swedish, have expressed indignation over the fact that they are still regarded as "immigrant writers." Similar phenomena have been observed in other contexts such as French "Beur writing": when these novels started to appear, academics and librarians alike were not sure whether they should be catalogued as French or North African literature (Hargreaves 1996, Aitsiselmi 1999). Nilsson (2010, 2012, and 2013) also notes that ethnicity has been the central focus in discussions about Swedish "immigrant and minority literature," both within and outside academia. According to him (2012), Khemiri's and Bakhtiari's novels are best understood as a critical dialogue about Sweden as a multicultural society and a satire and critique of "Swedish immigrant literature," including the language use observed within it. This viewpoint is based on the argument that focusing on ethnicity "produces othering and exoticizing" and "contributes to the racialization of non-Swedish ethnicities" (Nilsson 2010: 201, 208-216; 2013: 47-48; see also Behschnitt 2013: 194-195). Nilsson's arguments are largely based on Amodeo's, Mohnike's, and Behschnitt's observations on Swedish and German migrant writing (Amodeo 1996; Behschnitt & Mohnike 2006; Mohnike 2007; Behschnitt 2010). Thus, "immigrant literature" is a discursive category in which the production and reception of texts relies largely on paratextual facts such as the writer's foreign-sounding name (see also Tuomarla 2013: 196). For that reason, "migrant writers" are expected to expose their immigrant identity and experience in their texts, which are subsequently read as a source of information and as biography. Khemiri's and Bakhtiari's novels are central representatives of this genre in Sweden. In addition to paratextuality, Nilsson (2010: 203; 2013: 47-48) argues that authenticity can be constructed thematically by representing an "immigrant problematic" and stylistically through language use, interpreted as representing "immigrant Swedish." Overall, stereotypes rather than facts function as tools in the discursive construction of this genre. However, the criticism that both critics and writers have expressed against the discursive construction of "immigrant literature" in Sweden has resulted in the "death" of the immigrant writer, according to Nilsson (2013).
Along these lines, authenticity emerges as a key concept defining the genre of "migrant novels" and a key problem faced by the translators in the representation of sociolinguistic variation. Authenticity has also been one of the cornerstones of sociolinguistics and its predecessors: the search for "uncorrupt," "original" native speakers of dialects and sociolects pertaining to specific groups has oriented dialectologists and sociolinguists of the variationist paradigm alike (Coupland 2010). However, in recent sociolinguistic research, authenticity and related concepts such as the native speaker and the boundaries between different languages and language varieties have been questioned (see e.g. Eckert 2003; Bucholtz 2003; Coupland 2003, 2010, and 2014; Heller 2014). This contestation is rooted in the criticism directed against formalist notions of language in sociolinguistics (Cameron 1990). Indeed, sociolinguistics and linguistic anthropology have rediscovered Bakhtin's (1986) and Voloshinov's (1986) ideas, according to which language is essentially a dynamic process and a hybrid construction rather than a fixed entity.

Interestingly, the rediscovery of hybridity in sociolinguistics appears to parallel the discovery of hybridity and the contestation of boundaries in postcolonial and poststructuralist translation theory. Postcolonial and poststructuralist theory consider the "original" to be an impossible translation (e.g. Johnson 1985), on the one hand, and writing and translation a fecund site of creation, on the other hand. Hybridity is a key concept in such approaches. For example, Bhabha (1996: 58), echoing Bakhtin's views on hybridity and doubleness in language, argues that "discursive doubleness" may open up a space capable of engendering a new "speech act" (in an understanding of the term that differs from speech act theory), including a new site for "writing the nation" (Bhabha 1990: 297). Similarly, Gentzler (2002: 217) argues that, in poststructural translation, "hybrid sites of new meaning open up; new borders are encountered and crossed, often with surprisingly creative results." Therefore, in a postcolonial context, translation implies a reflection about the nature, role, and position of the translator and their readership (Buzelin 2006: 110), for translation does not just happen between cultures, it constitutes them (Gentzler 2008: 5).

While multilingualism in literature and translation is not a new phenomenon (see e.g. Grutman 1998), Meylaerts (2006: 1) argues that the recent focus on multilingualism in translation studies is related to the fact that "its modalities have changed due to recent technological, political, and other developments." Buzelin (2006: 92), in contrast, thinks that Bakhtinian and postcolonial theories have prompted this shift. Thus, scholars, critics, and editors have started to pay attention to hybridity, multilingualism, and orality. This can be explained by the fact that there are new literary markets responsive to hybridity and more authors from formerly colonized areas where linguistic and ethnic hybridity is commonplace.
Sociolinguists know that in terms of global language practices, hybridity is the rule rather than an exception (Blommaert 2006: 169). Nevertheless, authenticity, strict boundaries, and the concept of the native speaker are still the cornerstones of language professions, and of translation and interpreting in particular: hybridity may be celebrated, but it is difficult to escape from boundaries. The societal and scholarly discussion about the "Swedish migrant novel" focuses on the novelty constituted by hybridity. This is visible not only in the personal identity of the characters and the author but also in language use. Nonetheless, this very celebration of hybridity is accompanied by the search for, and identification of, a distinct variety of language purportedly used by migrants. A more detailed analysis of critical approaches to authenticity may explain this mechanism.

Critical positions towards authenticity have centered on the links between authenticity and essentialism, i.e. "the position that the attributes and behavior of socially defined groups can be determined and explained by reference to cultural and/or biological characteristics believed to be inherent of that group" (Bucholtz 2003: 400). Such essentialism is linked to the concepts of iconization, recursiveness, and erasure introduced by Gal and Irvine (1995, 2000). Thus, linguistic features associated with a group and perceived as differences are interpreted as being iconic of the identities of the speakers. A mechanism of selection then emerges, through which certain distinctions and oppositions are maintained and created and others dismissed. Language ideologies can be identified as a major force governing this process. Other theorizations of this process include Agha's (2003) concept of enregisterment, a phenomenon by which a way of using language is distinguished among other usages and becomes a register that is socially recognized.

As a result of such essentializing practices, communicative repertoires become indexically linked to repertoires of identities (Cameron 2003: 448-449; Blommaert 2006: 167-168). Such processes explain how, for example, African American Vernacular English has been correlated mostly with the socio-economic features of the speaker population, rather than with the linguistic properties of this variety (Mufwene 2001: 23). But as Buzelin (2006: 96-97) and Määttä (2004) observe, literature is not immune to such processes. In fact, the "Swedish migrant novel" is based on a process in which features associated with migrant usages of language have become enregistered and iconized as a socially recognized language variety. Stroud (2004) and Milani (2010) have analyzed naming practices linked to the emergence of this variety as ideological and imaginary constructions which function as tools to create boundaries between that which is ethnically Swedish and that which is not. Consequently, while hybridity is almost invariably mentioned by literary critics and academic scholars studying these phenomena, and while some of these novels have also been read as "revolutionary speech acts" (e.g. Lacatus 2007), the variety identified by labels such as "migrant Swedish" is the tool through which this hybridity and the transformative acts attached to it are materialized.
On a personal level, multilingualism is best understood as a hybrid repertoire shaped by a life-long linguistic trajectory, rather than as a repository of stable, bounded entities composing a plurality of monolingual varieties. As a matter of fact, heterolingualism is a feature of voices and speakers rather than a feature of languages (Blommaert 2006: 167-173). Therefore, while identity is best understood as a position and a process constructed within representation (Hall 1996: 2, 4; 1997: 33), identity becomes fixed when it is inscribed in the language use of a novel, and when heterolingualism and hybridity are linked to particular language varieties.

Concluding remarks

As Folkart (1991: 433) remarks, the representation of non-standard language can be an internal necessity for the creator of the source text. Such emotional links constitute a component of the "feel" of the novel that is difficult to translate. Nonetheless, Lefevere (2000: 240) has identified misunderstandings and misconceptions, or refractions, as a major explanation for the influence and exposure that a writer's work may gain: writers and their work "are refracted through a certain spectrum" and interpreted against a given background. As my analysis has shown, shifts affecting speech and thought representation and focalization, occasioned by the translation process, can be attributed to a wide, seemingly unsystematic array of textual factors. Refractions related to larger contextual factors thus emerge as a potential explanation for the "feel" of a text and its translation.

A coming-of-age story written by a person who has a name that can be associated with a minority or migrant population is typically read as a portrait and documentary of the minority or migrant experience. This process includes the search for features representing authentic minority or migrant language use in the text. Furthermore, the text is read as an autobiography in which not only the instances of the character and the narrator but also that of the author are conflated, for the author is interpreted as "knowing" the people and the environment of the novel's diegesis in precisely the same way the narrator does (cf.
Genette 1972: 226; Cohn 2000; Gavins 2007). Even in third-person narratives such as Bakhtiari's novel, the narrator and the author alike are interpreted as being present in the story and telling their own story, as if the third person were only a masquerade for the first person. Therefore, while the ethnic-minority perspective is just one of the many possible dimensions on which the novel's interpretation could be anchored, it invariably becomes the predominant one: we only identify one voice and one central perspective to which "linguistically subjective elements and constructions are referred" (Banfield 1991: 23-24). Paratextual elements such as the author's name, the book cover, and the title of the book strengthen this interpretation. For example, Bakhtiari's book is translated as Mistään kotosin ('Coming from nowhere') in Finnish. The cover of the Swedish paperback depicts a detail from a gray urban landscape with the book title in red and ornaments above and below. The cover of the Finnish paperback, in contrast, portrays a dark-haired young woman wearing a headscarf or a veil to cover the lower part of her face, and the title is composed of multicolored letters, some of which show details from Oriental rugs. Paratextual factors of this kind accentuate the willing suspension of disbelief (Stearns 2014), pushing the readers to suspend their disbelief regarding the fictional nature of the story, its characters, and the language varieties used in the novel.

In the sense of multiple possible voices representing multiple possible contexts and interpretations, polyphony is a quintessential feature of language. But we all interpret utterances, texts, and all language use differently; different readings and misunderstandings are part of our everyday life. Divergent readings are a particularly salient feature of written communication because the tools for creating shared contexts between the producer of the text and its recipients are limited. Nevertheless, certain interpretations and representations are more salient than others because they are culturally or discursively more prominent; certain voices are recognized whereas others are not heard or remain secondary. I argue that alongside cultural or discursive prominence, this selection of voices is related to our cognitive limitations: prototypical interpretations and representations emerge because otherwise we would not be able to make sense of the chaotic world around us. Communication would be quite complicated if we had to consider all possible interpretations equally and check their accuracy, i.e. understand all indexical meanings related to different voices and the contexts they activate, and act accordingly.

The reading of a literary text is subject to the same limitations as any interpretation of language use. And yet, literary scholars, literary linguists, translation scholars, and translators probably have more sophisticated skills for deciphering polyphony than most other readers. For example, we may be able to distinguish the translator's voice and discursive presence (Hermans 1996: 27). But we are not able to distinguish the indexical complexities of this and other voices in their entirety. This article is not an exception, although its aim is to provide a critical analysis taking into account as many perspectives as possible.
The difficulty of deciphering polyphony (in the wider sense of the term) and the predominance of certain interpretations over others are also related to the fact that boundaries and their inevitable corollary, standards, are meaningful to people. What is at stake as a quintessential criterion for genre membership in the "ethnic coming-of-age story" is the presence of language variation and distinct language varieties related to ethnicity. This is why the translator is under considerable pressure to "pass" as an authentic reproducer of the varieties evoked in the source text.

Conceptualizing language and language varieties as bounded entities is perhaps one of the most fundamental language ideologies, for it is linked to the construction and recognition of identities. The identity connection is perhaps also the reason why we do not easily perceive these language ideologies and the practices or discourses through which they are reified. In fact, I argue that due to our cognitive limitations and inability to process and interpret heteroglossia and polyphony in all their complexity, we inadvertently consider languages and language varieties as entities separated by boundaries. Such boundaries, created by differences and distinctions, can be conceived both as a condition for the commodification of language and as a consequence of that commodification. Thus, one cannot write, publish, and sell a novel centering on the coming-of-age process of a migrant or a representative of an ethnic or national minority (that is, one cannot occupy a subject position from which this particular genre pertaining to the discourse of hybridity potentially emanates) unless one is a member of such a group. In other words, the subject position of the real world and the position formed by the novel's diegesis have to be identical (cf. Simpson 1993: 32).
2018-12-13T02:44:55.022Z
2016-08-29T00:00:00.000
{ "year": 2016, "sha1": "6c24026337dc9f6bd615f7734e33412dac05d7c1", "oa_license": "CCBY", "oa_url": "https://journals.linguistik.de/ijll/article/download/76/31", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "8df4a7c0915dcc6e437d38db2a5c224e48782e77", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Sociology" ] }
250227133
pes2o/s2orc
v3-fos-license
Chikungunya Virus Asian Lineage Infection in the Amazon Region Is Maintained by Asiatic and Caribbean-Introduced Variants

The simultaneous transmission of two lineages of the chikungunya virus (CHIKV) was discovered after the pathogen's initial arrival in Brazil. In Oiapoque (Amapá state, north Brazil), the Asian lineage (CHIKV-Asian) was discovered, while in Bahia state (northeast Brazil), the East-Central-South-African lineage (CHIKV-ECSA) was discovered. Since then, the CHIKV-Asian lineage has been restricted to the Amazon region (mostly in the state of Amapá), whereas the ECSA lineage has expanded across the country. Despite the fact that the Asian lineage was already present in the Amazon region, the ECSA lineage brought from the northeast caused a large outbreak in the Amazonian state of Roraima (north Brazil) in 2017. Here, CHIKV spread in the Amazon region was studied by a Zika–Dengue–Chikungunya PCR assay in 824 serum samples collected between 2013 and 2016 from individuals with symptoms of viral infection in the Amapá state. We found 11 samples positive for CHIKV-Asian and, from these samples, we were able to retrieve 10 full-length viral genomes. A comprehensive phylogenetic study revealed that nine CHIKV sequences came from a local transmission cluster related to Caribbean strains, whereas one sequence was related to sequences from the Philippines. These findings imply that CHIKV spread in different ways in Roraima and Amapá, despite the fact that both states have similar climatic conditions and mosquito vector frequencies.

Introduction

The Chikungunya virus (CHIKV) is transmitted mainly by two vectors, the Aedes aegypti and Aedes albopictus mosquitoes, which are widely distributed across different continents, including America [1-3], favoring its dissemination into new areas and contributing to its emergence, re-emergence, and outbreaks in different parts of the world [4]. Humans, once infected, can develop the disease, whose clinical manifestations include fever, headache, nausea, fatigue, myalgia, arthralgia, and rash [5]. The recovery time ranges from weeks to years. In general, clinical severity is associated with increasing age [6]. However, some patients (3 to 28%) do not manifest clinical symptoms [7].

CHIKV belongs to the Togaviridae family and is classified as an Alphavirus. The genome is made up of two open reading frames (ORFs) that encode two polyproteins, which are then processed into four non-structural proteins (nsP1-nsP4) and five structural proteins (capsid, E3, E2, 6K, and E1) [8]. Phylogenetic studies have shown that CHIKV is divided into four genotypes or lineages: Asian; Indian Ocean (IOL); East-Central-South African (ECSA); and West African (WA) [9]. Originally, CHIKV was isolated in 1952 on the Makonde plateau in Tanzania [10,11]. Later, it was detected in different locations in Africa, Asia, Europe, and the Indian and Pacific oceans [12]. In South America, CHIKV was first introduced in late 2013 from the Caribbean islands [13]. Since then, it has spread to different countries in South America. In Brazil, the first autochthonous CHIKV cases were confirmed in September 2014, resulting from two independent introductions [14]: the Asian/Caribbean lineage (CHIKV-Asian) was first identified in September 2014 in the Oiapoque municipality, state of Amapá, followed by the East-Central-South African (CHIKV-ECSA) lineage, which was introduced in the city of Feira de Santana, state of Bahia [14,15].
Since then, several autochthonous cases have been described [5,16]. Likewise, several imported cases have been reported [17,18]. In Roraima state, in the Amazon region, CHIKV-Asian was initially introduced in 2015, and a large CHIKV outbreak occurred in 2017, caused by an ECSA-lineage virus [18]. The Brazil-Guyana transboundary zone (ZTBG) is in the north of the South American continent and has a hot and humid climate. The ZTBG is home to four municipalities: Saint-Georges de L'Oyapoque, Camopi, and Ouanary on the Guyanese side, and Oiapoque on the Brazilian side. The municipality of Oiapoque is located 590 kilometers from Macapá, the state capital of Amapá (Brazil). With a population of 28,000 people, Oiapoque is the ZTBG's largest and most important commercial center, with considerable movement of people between Brazil and Guyana. Since the arrival of CHIKV in South America from the Caribbean Islands, French Guyana has been one of the countries most affected by the epidemic [19]; soon afterwards, the municipality of Oiapoque also suffered from the Chikungunya epidemic (with 1541 cases between 2014 and 2016), demonstrating that this border region is an important gateway for the emergence of new viral epidemics. In this study, we report ten near-full-length CHIKV sequences detected in individuals from Amapá, north Brazil. All of these sequences are from the CHIKV-Asian lineage, and nine of them form a single monophyletic clade, implying a local transmission cluster. One of the Amapá sequences was found to be closely related to strains from the Philippines, implying that CHIKV-Asian transmission in the Amazonian region involves multiple strains that originated from distinct geographical locations.

Sample Collection

A total of 824 serum samples from individuals presenting with symptoms compatible with arbovirus infection were obtained from the LACEN (Laboratório Central) of the state of Amapá, of which 96, 240, 283, and 205 samples were available from the years 2013, 2014, 2015, and 2016, respectively. Additionally, epidemiological data, such as the date of symptom onset (i.e., fever with headache, myalgia, arthralgia, weakness, etc.), the date of sample collection, sex, age, and municipality of residence, were collected for molecular diagnostics.

Sample Processing and Quantitative Real-Time RT-PCR

A Roche MagNA Pure 2.0 automatic nucleic acid extraction machine was used to extract viral RNA from the samples (MagNA Pure LC instrument, Roche Applied Science, Indianapolis, IN, USA). The extraction reagents were from Roche's MagNA Pure LC Total Nucleic Acid Isolation Kit High Performance, Version 8, and the methodology followed the kit instructions. Each sample was extracted from a volume of 200 µL of blood plasma; if the sample did not have a total volume of 200 µL, PBS was added and the contents of the sample tube were gently mixed with a pipette. Each sample had a final elution volume of 60 µL. The samples were kept in a −80 °C freezer after extraction. After that, the samples were put through a sequence of qPCR tests. To begin, all samples were subjected to the Bio-Rad (Bio-Rad Laboratories, Inc., Hercules, CA, USA) ZDC (Zika, Dengue, Chikungunya) Multiplex qPCR Assay. The assay was carried out according to the manufacturer's protocol included with the kit, using 5 µL of extracted RNA. The samples that tested positive in the ZDC assay were sent for NGS analysis.
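As a rough illustration of the triage step described above, the sketch below partitions a qPCR run by reported Ct value. The cutoff, sample IDs, and values are all invented for illustration; in practice the ZDC kit's own interpretation rules (not reproduced in this text) apply.

```python
from statistics import mean

# Hypothetical positivity cutoff for illustration only; the kit's
# own interpretation rules govern real runs.
CT_CUTOFF = 38.0

def triage(ct_values):
    """Split samples into positives and negatives by reported Ct value."""
    positives = {sid: ct for sid, ct in ct_values.items()
                 if ct is not None and ct < CT_CUTOFF}
    negatives = [sid for sid in ct_values if sid not in positives]
    return positives, negatives

# Invented sample IDs, with Ct values in the range later reported for
# the CHIKV positives (20.2-37.35, mean 28.02); None = no amplification.
run = {"S001": 20.2, "S002": 28.0, "S003": 37.35, "S004": None}
pos, neg = triage(run)
print(f"{len(pos)} positive, {len(neg)} negative; "
      f"mean Ct of positives = {mean(pos.values()):.2f}")
```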
Library Preparation and Next-Generation Sequencing

The next-generation sequencing library was prepared as described by da Costa [20]. To eliminate host and bacterial cellular debris, 0.3 mL of the sample was centrifuged at 12,000× g for 5 min at 8 °C, and the supernatant was filtered through a 0.45 µm Millipore filter (Billerica, MA, USA). The filtrate was then treated for 1.5 h at 37 °C with a mixture of nucleases to digest unprotected nucleic acids, keeping only the infectious viral nucleic acids, which are protected from digestion by their capsids. The ZR and ZR-96 DNA/RNA kits (Zymo Research, Irvine, CA, USA) were then used to extract total nucleic acid. The elution volume was 0.05 mL of nuclease-free water, according to the manufacturer's guidelines. The SuperScript III kit (Life Technologies, Grand Island, NY, USA) was used to synthesize the first strand of cDNA, and the Klenow Fragment (New England Biolabs, Ipswich, MA, USA) was used to synthesize the second strand. The final product was then used for library preparation with Nextera XT (Illumina, San Diego, CA, USA). Illumina software was used to demultiplex the paired-end, 300 bp sequences generated by the MiSeq. The data were then run through the Blood Systems Research Institute's "virus discovery" pipeline on supercomputers (Deng et al., 2015). Bowtie2 was used to filter the sequences, excluding human, bacterial, and fungal sequences. The SOAPdenovo2, ABySS, MetaVelvet, CAP3, Mira, and SPAdes algorithms were later used to reconstruct viral genomes. BLASTx and BLASTn were used to analyze the contigs. The quality and coverage of the complete or partial genomes were assessed using Geneious R8 (Biomatters, San Francisco, CA, USA).

Phylogenetic and Bayesian Analysis

Firstly, we submitted the sequences generated in this study to a genotyping analysis using the phylogenetic arbovirus subtyping tool, available at http://genomedetective.com/app/typingtool/chikungunya (accessed on 8 May 2022) [21]. To investigate the phylodynamics of CHIKV in the state of Amapá, we downloaded all sequences assigned as CHIKV from GenBank (n = 6232), submitted them to Genome Detective, and selected only the Asian and Caribbean genotype sequences, leaving 1034 sequences. This dataset, plus the Brazilian sequences, was aligned using MAFFT [22] and edited using AliView [23]. Partial, poorly aligned, and identical sequences were removed from the dataset. A final dataset of 257 sequences was used for phylogenetic analysis. We estimated maximum-likelihood phylogenies in PhyML [24] using the best-fit model of nucleotide substitution, as indicated by the jModelTest application [25]. To investigate the temporal signal in our CHIKV dataset, we regressed root-to-tip genetic distances from this ML tree against sample collection dates using TempEst v1.5.1. Time-scaled phylogenetic trees were inferred using the BEAST package v.1.10.4 [26]. We employed a model selection analysis using both path-sampling and stepping-stone models to estimate the most appropriate model combination for Bayesian phylogenetic analysis; the best-fitting combination was the TN93 substitution model plus gamma correction with a Bayesian skyline coalescent model. Phylogeographic analyses were applied as an asymmetric model of location transitioning coupled with the Bayesian stochastic search variable selection (BSSVS) procedure. We complemented this analysis with Markov jump estimation, which counts location transitions per unit time along the tree.
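To make the temporal-signal step concrete, the sketch below performs the same kind of root-to-tip regression that TempEst automates: distance from the root is regressed against decimal sampling date, so the slope approximates the clock rate and the x-intercept approximates the root date. The distances are invented for illustration; this is not the study's data or TempEst's implementation.

```python
# Root-to-tip regression: decimal sampling dates vs. distance from the
# root (substitutions/site). Invented values for illustration only.
data = [
    (2013.5, 0.0004), (2014.2, 0.0009), (2014.9, 0.0013),
    (2015.6, 0.0018), (2016.3, 0.0023),
]

n = len(data)
mean_t = sum(t for t, _ in data) / n
mean_d = sum(d for _, d in data) / n
s_tt = sum((t - mean_t) ** 2 for t, _ in data)
s_td = sum((t - mean_t) * (d - mean_d) for t, d in data)
s_dd = sum((d - mean_d) ** 2 for _, d in data)

rate = s_td / s_tt                 # slope ~ clock rate (subs/site/year)
intercept = mean_d - rate * mean_t
r_squared = s_td ** 2 / (s_tt * s_dd)
root_date = -intercept / rate      # x-intercept ~ inferred root date

print(f"rate ~ {rate:.2e} subs/site/year, r^2 = {r_squared:.2f}, "
      f"root date ~ {root_date:.1f}")
```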
The Markov chain Monte Carlo chains were run long enough to ensure stationarity and an adequate effective sample size (ESS) of >200. A final maximum clade credibility tree was generated by summarizing the results of the Bayesian phylogenetic inference and was viewed in FigTree software [20,27].

PCR Assay

We processed 824 samples using the ZDC-PCR assay, of which 788 were negative and 36 were positive. Regarding the 11 CHIKV-positive samples, the average RT-PCR cycle threshold was 28.02 (ranging from 20.2 to 37.35); these samples came from patients with an average age of 32 years, the majority of whom (67.6%) were female (Table 1).

Location of Sample Collection

The patients lived in the Macapá (capital of the state of Amapá), Laranjal do Jari, and Porto Grande municipalities (Figure 1). At the time of sample collection, patients had the following symptoms: fever in the last seven days, muscle pain, localized exanthema, and cough.

Next Generation Sequencing and Genotyping Tree

From the CHIKV samples, we were able to generate 10 complete or near-complete genome sequences, which were deposited in GenBank under NCBI accession numbers OL343608-OL343617. The identification of CHIKV genotypes was performed using phylogenetic analysis of full-length genome datasets and an online tool (http://genomedetective.com/app/typingtool/chikungunya (accessed on 8 May 2022)). We also inferred a maximum likelihood tree that includes additional CHIKV-Asian references and variants from other countries to give more support to the classification of our sequences (tree not shown). Both approaches indicated that all sequences generated in this study belong to the Asian lineage (Figure 2).

Phylogenetic Analysis

We applied the maximum likelihood (ML) criterion to construct phylogenetic trees of complete genomes of the CHIKV-Asian lineage using sequences from South Asia, the Caribbean, and the American continent. The aim was to understand the relatedness of the Brazilian sequences to CHIKV-Asian lineages. Initially, we used 257 CHIKV-Asian lineage sequences to construct a phylogenetic tree (Figure S1, Supplementary Materials). This ML tree indicates that all sequences from the Caribbean and the Americas fall in a monophyletic clade previously designated the Caribbean lineage. The Caribbean lineage is related to Asian sequences, particularly strains from the Philippines detected in 2014 and 2016 (GenBank accession numbers MF773563 and MF773564, respectively).
This lineage also includes one sequence from a 2015 strain from French Polynesia (KR559473) [28]. In addition, all Brazilian sequences fall in the Caribbean clade, with the exception of one Brazilian sequence (695_Amapa_Brazil_2016), which is related to the sequence MF773564 from the Philippines. The clade formed by this sequence and MF773564 sits at the base of the Caribbean lineage. For illustrative purposes, we also constructed a smaller version of the ML tree with 99 sequences (Figure 3), including all sequences from South Asia/Oceania (indicated in red in the tree), all Brazilian sequences (blue in the tree), and some Caribbean references (indicated in green in the tree). This tree equally shows the monophyletic pattern of the Brazilian sequences of the Caribbean lineage, which are grouped into a single clade with high statistical support (0.99). In addition, the tree indicates that the sequence 695_Amapa_Brazil_2016 is not in the Caribbean clade and is related to one sequence from the Philippines from 2014.

Polyphyletic versus Monophyletic Pattern of CHIKV from Amapá

We applied a maximum likelihood hypothesis test (Shimodaira-Kishino test) to evaluate the monophyletic versus the polyphyletic pattern of our CHIKV sequences from Amapá, because sequences of the Caribbean lineage have reduced diversity (less than 1% nucleotide divergence). This low divergence, besides producing near-zero branch lengths, can also impact the grouping pattern of ML trees.
To provide more support for the circulation of distinct CHIKV variants, we constructed one tree using a coalescent approach assuming that all sequences from Amapá, including the sequence 695_Amapa, were monophyletic (Figure 4a), and tested it against a coalescent tree in which the sequence 695_Amapa is not monophyletic (Figure 4b). The Shimodaira-Kishino test indicated that the polyphyletic tree (Figure 4b) is the most likely tree because it has the better log likelihood compared with the alternative monophyletic tree (i.e., 21,146.93 and 21,121.1, respectively). These results indicate that 695_Amapa is not closely related to sequences of the Caribbean lineage.

Figure 3. Maximum likelihood tree of 99 CHIKV-Asian sequences. Sequences from Caribbean countries are in green and sequences from South Asia/Oceania are in red. The cluster composed of the Caribbean sequences plus almost all Brazilian sequences is labeled, and the Brazilian sequence related to a Philippine sequence is highlighted. Branch support is indicated by a color scale of 0 to 1 and is based on the Shimodaira-Hasegawa-like test. The tree was inferred using the TN-93 model plus gamma correction. The horizontal bar indicates nucleotide substitutions per site. For better visualization of the tree, some sequences were collapsed (blue triangles).

Time-Scaled Tree

We applied a Bayesian coalescent approach to better understand the temporal pattern and the topology of the trees of the CHIKV-Asian lineage. Initially, the linear regression of root-to-tip genetic distance against the sampling date in our dataset revealed a sufficient temporal signal (r² = 0.79, Supplementary Figure S2). Next, we used a constant population size coalescent model to infer a tree in which no CHIKV Caribbean-lineage sequences besides the Brazilian sequences were included (Figure 5). The CHIKV-Asian lineage was estimated to evolve at 6.7 × 10⁻⁴ substitutions per site per year. This time-scaled maximum clade credibility tree (MCC tree) shows that nearly all Brazilian CHIKV sequences grouped in a single clade with high posterior probability (PP = 0.99). The exception was the sequence 695_Amapa, which grouped with sequences from the Philippines. We also used a model (Skygrid) to evaluate the fluctuation of the effective population size over time.
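To put the estimated rate in perspective, a back-of-the-envelope calculation converts it to substitutions per genome per year; the genome length of roughly 11,800 nt is an assumption typical for alphaviruses and is not stated in the text.

```latex
% Assumed genome length: ~1.18 x 10^4 nt (typical CHIKV value).
\[
  6.7 \times 10^{-4}\,\frac{\text{subs}}{\text{site}\cdot\text{year}}
  \;\times\; 1.18 \times 10^{4}\ \text{sites}
  \;\approx\; 8\ \frac{\text{subs}}{\text{genome}\cdot\text{year}}
\]
```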
These results indicate that CHIKV was introduced into the Caribbean region in early 2013 from South Asian variants (probably Philippine variants) (PP = 1.0). Following this introduction, the number of infections increased steadily (Supplementary Figure S3). The MCC tree also showed that the Amapá sequences grouped into different clades, suggesting that the introduction of this genotype into Amapá occurred on at least two occasions in the middle of 2014. In addition, the tree indicates that the sequence 695_Amapa sits at the base of the Caribbean lineage (Figure 5). The presence of non-synonymous substitutions already described as potentially related to vector adaptability was investigated among the sequences from Amapá. We observed that none of the 10 Amapá sequences carry either the A226V (E1 protein) or the L210Q (E2 protein) substitution associated with increased CHIKV transmission in Ae. albopictus mosquitoes. On the other hand, other substitutions related to vector adaptability, such as T98A, A377V, and M407L (E1 protein) and G60D and A103T (E2 protein), were present, as shown in Table 2.

Discussion

The virus has spread across the region due to the widespread movement of individuals across the Brazil-Guyana border, notably for commercial reasons, as well as the region's vulnerable health system. Unauthorized logging and mining in the Oiapoque River basin have expanded human activity in the forest region, leading to the establishment of new communities near the municipality of Oiapoque. Brazilian miners' ventures deep into the forest have polluted the ecosystem with heavy metals used in gold mining, resulting in increased migratory movements between neighboring countries [29]. This uncontrolled flow of people creates a unique set of challenges for local health services, since these highly mobile individuals are hard to track, making it difficult to assess resource needs and plan measures at the local level [30,31]. DENV-4 [32], DENV-1 [27], and CHIKV [14] are only a few examples of viral variants that have recently been found in the Brazilian Amazon. To better understand CHIKV evolution in the state of Amapá, we sequenced ten full-length genomes and performed a phylogenetic analysis. Almost all CHIKV sequences from the Macapá and Laranjal do Jari municipalities clustered in a monophyletic phylogroup, with one sequence from Macapá municipality (695_Amapa) showing a unique clustering pattern linked to a sequence reported in the Philippines in 2014 (GenBank ID: MF773563) [28]. Furthermore, the 695_Amapa and MF773563 sequences cluster near the base of the Caribbean lineage. With approximately 16,000 people infected between 2014 and 2015, French Guyana became the first South American country to declare autochthonous CHIKV infections in 2014. It has been claimed that South Pacific strains gave rise to the Caribbean lineage [33,34].
More recently, it has been proposed that the Caribbean lineage was imported either directly or indirectly from Southeast Asia [28]. This possibility is strengthened by our phylogenetic analysis. Furthermore, we discovered one Amapá sequence that, along with the Philippine strains, lies at the base of the Caribbean lineage's clade. It is worth noting that, compared with the Asian/Oceanic sequences, all Caribbean sequences include two characteristic substitutions: V226A in the E2 gene and L20M in the 6K gene. Some alterations have been shown to affect CHIKV fitness, mostly by influencing the virus's ability to transmit. In the Indian Ocean outbreak, for example, the CHIKV E1 glycoprotein substitution A226V has been linked to increased transmission by Aedes albopictus, which was thought to have lower vector capacity for CHIKV transmission than Aedes aegypti [35]. This substitution has also been implicated in European outbreaks involving Aedes albopictus [36]. L210Q arose in IOL strains in India and was linked to greater CHIKV transmission by the Aedes albopictus vector [37], while V226A boosted viral dissemination in Aedes aegypti but not in Aedes albopictus [38,39]. To our knowledge, none of the Brazilian strains (Asian or ECSA lineages) contain the substitutions A226V in the E1 glycoprotein or L210Q in the E2 glycoprotein. On the other hand, the Amapá sequences, like other Asian strains, have threonine (T) at position 98 of the E1 glycoprotein, which restricts the enhancement of infectivity in Ae. albopictus conferred by the A226V substitution [40]. In particular, the sequence 695_Amapa has the residues V226 in the E2 gene and L20 in the 6K gene, like the sequences from South Asia/Oceania. Moreover, we found six amino acid changes in the Amapá sequences (nsP1, capsid, E1, and E2 proteins), none of which have been described previously. The capsid, E2, and E1 glycoproteins are responsible for recognizing the host receptor and for entry into the cell [41,42], and they are widely used for vaccine development and serodiagnostic assays [43]. These substitutions in the E1 and E2 proteins could facilitate viral evasion and potentially allow host-switching of the CHIKV-Asian lineage in the Amazon region [44]. On the American continent, the CHIKV-Asian lineage is widely distributed, while the CHIKV-ECSA lineage is restricted to Brazil, Paraguay, and Haiti [45,46]. In contrast, within Brazil, the CHIKV-Asian lineage has been limited to a small number of cases and geographically restricted (predominantly to the state of Amapá), whereas the ECSA lineage is widespread [20,47]. It has been shown that CHIKV-Asian is less capable of accumulating mutations that facilitate its transmission by vectors compared with the ECSA lineage [48]. In Roraima, a state in the north of Brazil that borders Venezuela, the CHIKV-Asian lineage was introduced in 2015 from northeastern Brazil, and a large outbreak in 2017 was caused by the ECSA genotype, which gradually replaced the original CHIKV strains [18]. Although Roraima and Amapá have very similar climate conditions, the CHIKV-Asian lineage continued to be predominant in the state of Amapá. It is important to mention that Ae. aegypti and Ae. albopictus are endemic in the Amazon region [49]. In addition, the dissemination of CHIKV-Asian by Ae. albopictus might be restricted due to the genetic background of the virus [39].
This pattern of CHIKV strain replacement in the Amazon region is probably affected by multiple factors, such as the ecological community, human behavior, and the genetics of the virus. The modest number of CHIKV sequences we were able to recover from the samples was perhaps the most significant limitation of our research, restricting our ability to examine the genetic diversity of CHIKV in depth. Multidisciplinary research will be required to understand the key factors that contribute to the transmission dynamics of CHIKV in the Amazon.

Conclusions

Despite the low frequency of CHIKV in our samples, we observed various strains circulating in Amapá, indicating that CHIKV-Asian transmission in the Amazonian region involves strains that originated from different geographical locations.

Institutional Review Board Statement: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent was obtained from adult individuals and from all parents or guardians of child participants involved in the study. Ethics Committee approval was granted by the Faculdade de Medicina da Universidade de São Paulo (CAAE: 53153916.7.0000.0065). Informed Consent Statement: Written informed consent has been obtained from the patient(s) to publish this paper.
Evaluation and Lessons Learned from a Campus as a Living Lab Program to Promote Sustainable Practices

Any group that creates challenging goals also requires a strategy to achieve them and a process to review and improve this strategy over time. The University of British Columbia (UBC) set ambitious campus sustainability goals, including a reduction in its greenhouse gas emissions to 33% below the 2007 level by 2015, and 100% by 2050 (UBC, 2006). The University pursued these goals through a number of specific projects (such as a major district energy upgrade and a bioenergy facility) and, more generally, through a "Campus as a Living Lab" (CLL) initiative to marry industry, campus operations, and research to drive innovative solutions. The CLL program has achieved significant successes while also demonstrating many opportunities for improvement and lessons learned. The aim of this study was to examine the UBC CLL program, to identify and formalize its operations, to extract key transferable characteristics, and to propose replicable processes that other universities and municipalities can follow to expand their sustainable practices in similar ways. There was a learning curve in implementing a CLL program at UBC; thus, the goal of this study was to potentially shorten this learning curve for others. The research involved an ethnographic approach in which researchers participated in the CLL process, conducted qualitative analysis, and captured the processes through a series of business process models. The research findings are shared in two parts: (1) generalized lessons learned, expressed as key transferable characteristics; (2) a series of generic organizational charts and business process models (BPMs), culminating in strategies learned through defined processes that illustrate what was required to create a CLL program at UBC. A generalized future improvement plan for UBC CLL programs is defined, generic BPMs of CLL projects are evaluated, and the level of engagement of multiple stakeholders through the phases of the project life cycle is given in the conclusion for the future use of other Living Lab organizations.

Introduction

Universities play a vital role in addressing global sustainability challenges and opportunities, because they are the institutional platforms where research, educational activities, community engagement, and operations meet to produce a long-lasting impact on societal change [1,2]. Higher education institutions have been instrumental in transforming societies with regard to sustainable development. However, it takes substantial time for these institutions to explore sustainable development implementations and holistically integrate them into their systems [3,4].
International Sustainable Campus Network (ISCN) member universities such as the University of British Columbia (UBC) are dedicated to embedding sustainability in curricula, operations, research, and public-private partnership visions, educating future leaders and community members about sustainability. In 2014, UBC approved a 20-Year Sustainability Strategy covering a wide spectrum of university activities: an enhanced focus on developing research within and outside the university through strategic partnerships with industry and government; a renewed focus on university operations and infrastructure through the lens of the living lab, accompanying UBC's goal of eliminating greenhouse gas emissions by 2050; and, within teaching and learning, a renewed institutional commitment to embed sustainability learning across all undergraduate teaching programs by 2035 [5]. The Campus as a Living Lab (CLL) program, which is the focus of this research (this research's content is mostly drawn from the thesis of Paul Save, 2014 [6]), addresses collaboration between UBC's building operations, external companies, and researchers, in an effort to creatively and economically meet operational requirements while striving towards the goal of eliminating greenhouse gas (GHG) emissions. The University of British Columbia's Campus as a Living Lab (UBC CLL) program involved significant effort by many groups and has been seen as a very useful initiative; however, the full extent and workings of the program were largely unknown to many of the participants. This research set out to examine the UBC CLL program as a major activity to promote technological innovation in sustainability, to identify and formalize its operations, to capture lessons learned and opportunities for improvement, and to propose a generic version of the CLL program to serve as a guide for other organizations interested in a similar initiative. There was a learning curve in implementing a CLL program at UBC; therefore, the goal of this study is to potentially shorten this learning curve for others. The research involved an ethnographic approach in which researchers participated in the CLL process, conducted qualitative analysis of process outcomes (key transferable characteristics), and captured the business processes through a series of business process models. The UBC CLL program analysis has the potential to be a demonstrative example for all large organizations looking for managerial models for Living Labs. As identified in the literature review in the following background section, the need for a structured managerial model and standardized tools for decreasing the complexity of innovation activities and operational processes in living labs has been established.

Background

The living laboratory (LL) concept is defined as "the co-creation process in integrating research and innovation in a systematic way, on a given territorial context" [7,8]. A wide variety of activities are carried out under the umbrella of living labs, and they feature many different methodologies and research perspectives [8,9].
Westerlund and Leminen also define living labs as "physical regions or virtual realities, or interaction spaces, in which stakeholders from public-private-people partnerships (4Ps) of companies, public agencies, universities, users, and other stakeholders, all collaborating for creation, prototyping, validating, and testing of new technologies, services, products, and systems in real-life contexts" [10,11]. Thus, living labs or urban labs are collaborative entities of multiple stakeholders that are used by communities for innovation.

Literature Review on Living Lab Organizations and Institutions of Higher Education (IHE)

In the context of institutions of higher education (IHE), the campus acts as a living lab through its role and function as a teaching and learning institution of an educated society: it enables more robust research output, improves campus operations, and serves as an arena for societal learning. Campus sustainability has a great deal of potential and a role to play in translating sustainable development from a concept into more tangible results in a structured way [8]. The UBC Campus as a Living Lab (CLL) program is one of the first CLL programs to bring together multiple stakeholders while addressing most of the common indicators (CIs) [12] defined by the International Sustainable Campus Network (ISCN). According to Kılkış's review study [12] on ISCN common indicator reporting, as of 2015 the ISCN consisted of 73 member university campuses [13]. In total, 36 member universities provided public reports in the ISCN-GULF Sustainable Campus Charter Directory [13]. The ISCN-Global University Leaders Forum (GULF) Charter obliges campuses to abide by three guiding principles. First, campuses must consider aspects of sustainability in the process of planning, constructing, renovating, and operating buildings. Second, sustainability goals must be integrated into campus-wide master planning. Third, campuses must align education and research with the aim of being a living laboratory for sustainability [14], as cited in [12]. Campuses are given the flexibility to report on these principles according to their needs and interests [15], as cited in [12]. Kılkış's review aimed to compare the ISCN member campuses that provided publicly available ISCN reports through a systematic search for common indicators (CIs). The indicators used in the reporting practices were clustered into themes: energy, water, waste, CO2 emissions, transport, education, and research; the associated values for the relevant universities were compared on a quantitative basis. UBC was stated to be one of the first three contributors, with five reported CI themes, after GU and Stanford [12]. The UBC campus was described as active in an emerging role as a living laboratory for more sustainable practices toward managing environmental quality. The term "living lab" was used as early as 1999 by Kidd [16]; since then, various studies have emerged defining the "Living Lab" and "Campus as a Living Lab" concepts, generalizing the themes, and reviewing higher education institutions' roles and efforts via literature reviews and survey analyses [8,12,[16][17][18][19][20][21][22][23][24][25]. However, the management of multiple stakeholders and processes through CLL or Urban Lab programs has been thoroughly reviewed by only a few studies [26][27][28][29][30][31].
Zen [26] is one of the main authors who discussed strengthening campus sustainability initiatives by developing an integrative framework, a "transformative and integrative approach of the university living learning labs". Leminen and Westerlund [27], in their 2017 study titled "Categorization of Innovation Tools in Living Labs", proposed three ways to organize innovation activities in living labs: (1) standardized tools that decrease the complexity of innovation activities, leading to predefined incremental innovation outcomes; (2) a predefined linear innovation process that decreases the complexity of innovation activities; and (3) an iterative, non-linear innovation process with customized tools, which increases the likelihood of a novel innovation outcome. They also provided a set of implications for theory and practice and suggested directions for future research on living labs. Sonetti et al. [28] proposed a new campus sustainability assessment (CSA) approach based on campus typologies to enable meaningful comparisons, clustering problems related to current sustainability framework development and preparing charts according to the case studies reviewed in their work. Voytenko et al. [29], on the other hand, define LL categories such as sustainable living labs (SLLs) and urban living labs (ULLs), in which public-private-people partnerships (PPPPs) within LLs create beneficial preconditions for connecting sustainable innovations with the market and society, and thus potentially advance sustainable urban transitions. The authors describe the challenges that occur during collaborative alignment work with key stakeholders who have divergent interests. Velazquez et al.'s 2006 study [30] presents a proposed comprehensive managerial model for a sustainable university, integrating the multiple roles of its campuses, including partnerships for sustainability, based on empirical data collected from 80 higher education institutions around the world. The reviewed literature thus identifies the need for a structured managerial model and standardized tools to decrease the complexity of innovation activities and operational processes, and indicates that the divergent interests of multiple stakeholders must be addressed through collaborative alignment strategies. Various studies have therefore pointed to the need for a roadmap for the collaborative integration of multi-stakeholder projects. In this study, an evaluation of and lessons learned from the UBC Campus as a Living Lab program are shared as a roadmap, and business process models (BPMs) derived from public-private partnership projects at the University of British Columbia are generalized for the future use of living lab organizations.

History of Sustainability and the CLL Program at UBC

An important early spark of sustainability at UBC was the signing of the Talloires Declaration in 1990 [32]. This declaration arose from the convening of "twenty-two university presidents and chancellors in Talloires, France, to voice their concerns about the state of the world and create a document that spelled out key actions institutions of higher education must take to create a sustainable future" [33]. In 1997, "UBC became the first university in Canada to adopt a sustainable development policy" [34]. This policy directed UBC to create Canada's first sustainability office in 1998.
Sustainability activities accelerated in 2006, when UBC developed a four-year sustainability strategy and, one year later, became "one of six founding signatories to the University and College Presidents' Climate Statement of Action for Canada" [33]. Sustainability then became part of UBC's core mandate, with several sustainability-related goals incorporated into the University's overall strategic plan, including the goal to "make UBC a living laboratory in environmental and social sustainability by integrating research, learning, operations with industrial and community partners" [33]. UBC began the process of developing plans and initiatives to pursue this goal, beginning with the development of a "Sustainability Academic Strategy" in 2009. This strategy led to the creation of an organizational focus for sustainability at UBC, the University Sustainability Initiative (USI), with the objective of integrating campus-wide academic and operational sustainability efforts. The USI started with two major initiatives: the "Campus as a Living Laboratory" (CLL) program, to achieve influence through the campus's own operations in collaboration with academic functions and externally with industry, and the "Agent of Change" initiative, to drive change through the campus's procurement and supply chain mechanisms [34]. The USI also developed recommended goals to reduce the university's greenhouse gas emissions to 33% below 2007 levels by 2015, 66% by 2020, and 100% by 2050; these goals were then included in UBC's climate action plan [35]. The USI had its first formal meeting in March 2010 and was able to pass a budget for the program by the following April to support the initiatives [36]. This budget also supported the CLL in starting to reduce GHG emissions. The first projects to be completed were the Centre for Interactive Research on Sustainability (CIRS) Building, designed as a testbed for building science research [37]; the bio-energy research and diversification centre in 2012 [38]; and the Academic District Energy System, a retrofit to convert the campus heating infrastructure from steam to hot water [39,40]. This conversion was expected to reduce GHG emissions by 22 percent while achieving "CAD 5.5 million in annual savings including the cost investment for not reinvesting an aging steam system" [41].

UBC Campus as a Living Lab Organization

The Campus as a Living Lab (CLL) organization has experienced a steep learning curve. The UBC CLL has been a loosely defined initiative from its beginning. The CLL program grew to involve dozens of individuals (academics, a board representative, student advisory members, and various managerial staff) participating in several different committees under the University Sustainability Initiative (USI) in 2014 (see Figure 1). The major activity of the CLL was centred on projects that usually corresponded to major building or infrastructure developments on campus, along with some relatively smaller off-campus operational projects. The CLL program works to identify opportunities for CLL projects, to review project proposals (internal and external) to select which should proceed, and to participate in the development through to completion of the CLL projects. Some of the main organizational groups involved are a high-level steering committee, a working committee (the focus of the majority of CLL effort), and CLL project groups for each project that moves forward. Figure 2 shows the organizational chart (as of 2014).
Consisting of a group of individuals from various areas and levels of authority that review CLL projects (Figure 1), the CLL Working Group was intended to provide a review process ensuring that a majority of stakeholder representatives have a chance to provide input before a project is proposed to the board of governors (BoG) for approval. This not only provides an opportunity to refine the scope of a project, but also creates greater alignment between the intent of the project and the needs of the institution and broader community. However, the aim of incorporating multiple stakeholders also incurred higher management and overhead costs. Therefore, reorganization was needed to remove effort duplicated across other parallel research and organizations within UBC (see the discussion in Section 6).

Objective and Research Questions

The main objective of this ethnographic study was to document and model the CLL-related processes; the questions stated below directed the methods of analysis. Lessons learned and ideas for improvement arising from the answered research questions can provide a valuable reference for others interested in pursuing their own CLL strategies.

Main Research Questions

The research questions posed before starting the ethnographic study were: • How do CLL Working Group members and Steering Committee members interact during meetings to mediate problems in the process of integrating multiple stakeholders' needs? • How do these meetings lead to a better process definition for future actions? • Is it possible to document the development of the project processes through different charts, enabling a replicable method for universities and cities?

Methodology

It has been a challenge, even for people who have been directly involved in the CLL program, to clearly define its processes and practices in a way that would allow the program to be replicated elsewhere. Therefore, two different studies, both built on a broader ethnographic study, were conducted to formalize the CLL process, capture lessons learned, and generalize an idealized process roadmap, a process model overview, and a chart of the level of engagement of multiple stakeholders that others could replicate. This paper first graphically summarizes the chronological history of organizational structures within UBC's sustainability scope, built on the qualitative analysis of the collected data. Then, the business process models (BPMs) created to ease the evaluation of the case projects are summarized, and some key diagrams are created to define the overall UBC CLL processes. Lessons learned, captured from both the qualitative analysis and the BPM generation process, are presented and used to propose a generic version of the CLL process that others could replicate.
The methodology involved extensive data collection through document collection, interviews, analysis of numerous meeting minutes, and direct observation of 36 CLL meetings over 16 months. Analysis involved qualitative analysis of the dataset, the development of business process models of the actual work processes followed on three large projects, the distillation of lessons learned from the interviews and feedback sessions, and the development of a set of proposed generic CLL processes and tools to serve as a guide for other organizations.

Ethnographic Study Coding

During the ethnographic study, 98 weekly CLL Working Group and 20 monthly Steering Committee meeting minutes and associated documentation were reviewed. Based on the meeting minutes and the researchers' field notes, the items discussed in the meetings were coded to classify the topics and business processes. The contents of these meetings were encoded against a classification system derived to identify the correlation between committee discussions and CLL processes. This facilitated an understanding of the evolution of the CLL and provided a foundation for developing a method of writing notes on project updates, project assessment, project funding, and options for recruiting researchers for designated projects, as well as creating new project opportunities for researchers and students. Furthermore, the researchers conducted numerous interviews to uncover the history leading up to the current CLL implementation. The findings from three additional interviews with key people, conducted after a restructuring in 2017, are also included in this paper. To develop the coding scheme, two international standards that define a range of business processes were adopted to provide a framework against which the meeting contents could be coded. These standards were version 6.0.0 of the Cross Industry Process Classification Framework from the American Productivity and Quality Centre [42] and the fourth edition of the Project Management Body of Knowledge from the Project Management Institute [43]. This framework was further refined by excluding processes that were found to be irrelevant to the CLL discussions and adding some processes that were not adequately defined in the APQC/PMI standards. Each item discussed in the observed meetings was then coded against this final framework. An example is shown in Figure 3, which presents a sample of the business processes defined in the coding framework in the matrix rows, and three columns representing three CLL meetings. The number of instances of a particular type of business process discussed in each meeting is represented as a number in the matrix cells, with the total number of instances summed in the right-hand column. The results of this analysis enabled the researchers to identify a series of business processes that were demonstrably important for the CLL program.

Business Process Modelling Methodology

The CLL processes identified through the qualitative analysis, both as followed for the three major projects undertaken as the program was being formed and as intended for future projects, were formally modelled and mapped (see Section 6.1) using a combination of spider charts to assess the individual characteristics of candidate CLL projects and business process modelling visualization techniques for charting the CLL processes. The legend for the business process models (BPMs) created is explained in Figure 4. All of the case project BPMs can be interpreted using this legend.
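A minimal sketch of the tallying behind Figure 3 is shown below: each meeting's discussion items are reduced to framework codes, and a code-by-meeting count matrix with row totals is produced. The codes and meeting data here are illustrative placeholders, not the study's actual coding scheme or minutes.

```python
from collections import Counter

# Abbreviated placeholder codes; the study's scheme was derived from
# the APQC framework and the PMBOK, as described above.
FRAMEWORK = [
    "1.0 Develop Vision, Strategy, and Assessment Tools",
    "3.0 Develop Opportunities",
    "4.0 Assess the Environment",
]

# Each meeting reduced to the codes assigned to its discussion items
# (illustrative values, not the study's minutes).
meetings = {
    "2013-01-10": ["3.0 Develop Opportunities",
                   "1.0 Develop Vision, Strategy, and Assessment Tools"],
    "2013-01-17": ["3.0 Develop Opportunities", "3.0 Develop Opportunities"],
    "2013-01-24": ["4.0 Assess the Environment"],
}

# Build the code-by-meeting count matrix with row totals, mirroring Figure 3.
counts = {date: Counter(codes) for date, codes in meetings.items()}
for code in FRAMEWORK:
    row = [counts[date][code] for date in sorted(counts)]
    print(f"{code:<50} {row} total={sum(row)}")
```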
Figure 3. Example of plotting data points across the Campus as a Living Lab process framework. The first column illustrates a portion of the coding system developed for this study. Columns 2, 3, and 4 represent three sample meetings, illustrating the mapping of individual discussion items to specific coding items. The total count of all coded items for each code is shown in the fifth column.

The final step in the research methodology was to analyse the "as-is" CLL program, based on the researchers' personal assessments built on the ethnographic coding, on interviews with CLL participants, and on the case-based BPMs. To produce generalized BPMs for future CLL processes at UBC and a better roadmap for CLL committees and programs that could be implemented elsewhere, generic BPM diagrams and organizational structure schemes were created as an outcome.

Analysis of the CLL Process

The analysis of the overall CLL process with the defined methods is presented both quantitatively and qualitatively via the generated business process models (BPMs). Figure 5 shows the percentage of data points in each coding category of the ethnographic study of the UBC CLL (as illustrated in Figure 3). This provides a summary of the quantity of attention given to each major category of business process. Further analysis decomposed this summary into finer detail within each process category and assessed how the relative focus changed over time as the CLL program was initiated and became increasingly established. A review of how priorities shifted over time was completed to understand how the CLL Working Group changed its workload or scope (Figure 6). All categories were first graphed together to identify any interesting relationships. Patterns for three categories in particular emerged. For the first 260 data points, the categories "(3.0) Develop Opportunities" and "(1.0) Develop Vision, Strategy, and Assessment Tools" appeared to be in constant flux. This fluctuation indicates movement between conducting the work itself and trying to improve strategies for the work being conducted. From data point 261 onwards, "(4.0) Assess the Environment" and "(3.0) Develop Opportunities" were in flux. This was due to the urgency at the time of developing a comprehensive energy plan for the UBC campus so that its ambitious carbon footprint reduction targets could be met in time.

Figure 6. Campus as a Living Lab Working Group's flux of priorities from 6 December 2012 to 27 March 2014.
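The flux analysis behind Figure 6 can be approximated by a sliding-window share of each category across the ordered stream of coded data points, as in the sketch below; the window size and the simulated stream are assumptions for illustration only.

```python
import random

WINDOW = 50  # assumed window size for illustration

def rolling_share(points, category, window=WINDOW):
    """Fraction of the last `window` coded items that fall in `category`."""
    return [points[i - window:i].count(category) / window
            for i in range(window, len(points) + 1)]

# Simulated stream of coded data points (not the study's data).
random.seed(1)
stream = random.choices(["1.0", "3.0", "4.0"], weights=[3, 4, 2], k=400)

for cat in ("1.0", "3.0", "4.0"):
    shares = rolling_share(stream, cat)
    print(f"category {cat}: share ranges {min(shares):.2f}-{max(shares):.2f}")
```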
Business Process Models of UBC CLL Derived from Qualitative Analysis of Project Meeting Outcomes

The business process models (BPMs) are depicted along a timeline and show the multiple stakeholders involved, with a legend defining the illustrations, including the item responsible for carrying out each process at the time and the stakeholder group involved (Figure 4).

Project-Specific Business Process Models of Representative UBC CLL Projects

Three main representative UBC CLL projects illustrate the pathway of developing the project proposal and acceptance process for the UBC CLL. In deciding how to potentially improve future projects, this ethnographic study also followed these three case studies: CIRS, the Academic District Energy System, and the Bioenergy Research and Demonstration Facility. These case studies provide reference cases for a building, an analysis of campus-wide infrastructure options, and a facility project, respectively. They represent key initiatives to address UBC's long-term sustainability goals. These CLL projects were studied in detail, and the CLL-related business processes that were followed were identified and mapped. The projects are summarized as follows: (1) The Centre for Interactive Research on Sustainability (CIRS, Figure 7): CIRS is a CAD 36.9 million project to create a building that is, itself, a "living laboratory" to test the ability to meet aggressive, net-positive goals. (The building owner representative's overall goals for CIRS are to be a net-positive energy producer and a net-zero carbon building. It is designed with the intention of being a "living lab" (Robinson et al., 2013) with ongoing performance monitoring and activities to further improve performance. The building is equipped with an energy monitoring system (EMS) and a building management system (BMS). Data collection from over 3000 monitoring points (occupancy sensors, CO2, VOC, room temperatures, energy meters, many details of HVAC operations such as pump and fan temperature and flow details, window status sensors, solar PVs and transmitters, and water reclamation and irrigation system details) has been available since the building was fully occupied and operating in 2012. The building is also equipped with a tertiary water treatment system [44].) (2) The Academic District Energy System (Figure 8) emerged from a larger initiative (building operations) to review alternative energy sources at UBC. In addition to an initial feasibility study, a local "Energy X Contest" was created for people at UBC to pitch their ideas for additional options for UBC to pursue. Key drivers for the project included aging infrastructure, skyrocketing natural gas prices, newly implemented carbon taxes, and public sector offset requirements, which caused the campus to look for ways to reduce its carbon footprint. The result was a CAD 88 million project to convert the campus district heating system from steam to hot water. The project entailed the conversion of 131 buildings from steam to hot water, 14 kilometres of hot water distribution piping, and a new 60 MW hot water thermal energy centre. This resulted in "CAD 5.5 million in annual savings" and a reduction in GHG emissions of 22 percent.
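For orientation, the reported capital cost and annual savings imply a simple payback on the order of 16 years; this rough figure ignores the avoided cost of reinvesting in the aging steam system, the carbon-tax savings, and energy-price escalation mentioned above.

```latex
\[
  \text{simple payback} \approx
  \frac{\text{CAD } 88\ \text{million (capital)}}
       {\text{CAD } 5.5\ \text{million/year (savings)}}
  = 16\ \text{years}
\]
```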
From the produced case study BPMs, the evolution of each process for the different project types on campus can be followed by timeline, stakeholder involvement, and items required.

General Business Process Models Given as Steps of Project Requests at UBC CLL

One of the main goals of the CLL program is to use the UBC campuses as a testbed for the potential commercialization of products that can help with campus sustainability. UBC can act as a launching pad for technologies to move out of the lab and into mainstream use. Industry collaborators are interested in fast, effective, and value-oriented solutions to develop their products; therefore, the CLL can be seen as a path-of-least-resistance testing environment. The value proposition of the CLL to industry is that it can provide additional researcher capacity for development, assistance (with potentially government funding to match industry investment), monitoring, and verification of results in a lab environment. After UBC issued a request for information to develop strategic partnerships with industry in 2011, an increasing number of companies approached UBC wishing to collaborate. UBC had been following an ad-hoc process to pursue projects, but it was found that a more formal structure was needed if the CLL program efforts were to scale successfully. Due to an influx of unsolicited requests, the CLL needed to adapt to a more proactive, rather than reactive, model of governance. To achieve this, assessment tools for varying levels of analysis were developed to evaluate project fit with UBC. The CLL program projects evolved into two main categories: unsolicited and solicited proposals (Figure 10). While UBC has a formal process for all CLL requests for proposals, for strategic sustainability reasons and in accordance with its innovative CLL approach, UBC entertains unsolicited proposals through a refined process. These are subject to a screening procedure that is as rigorous as the formal request for proposal process. The Strategic Partnership Office, CLL Working Group, CLL Steering Committee, and Institutional Project Approval Group are the groups involved in solicited request approvals, and the approval process can take up to three years.
The need for fast evaluation of unsolicited proposals led UBC to raise the proposal threshold value from over CAD 2.5 million in 2011 to CAD 5 million in 2016 [46] (Figure 11). Capital projects over CAD 2.5 million needed to be evaluated with a rigorous process, which could take up to eight months. Projects not requiring this approval were able to proceed much more quickly than those that required it. In either case, the length of time required to complete a project can be longer than a participating company may anticipate [40].

The first stage (step 1) of an unsolicited request requires completing an online form and submitting a two-page proposal. This first step is crucial in ensuring that UBC's objectives align from the beginning of the project, and the form has been tailored to ensure that the information addresses specific questions. As a second step (step 2), the proposal is reviewed by the Strategic Partnerships Office, which provides feedback to the CLL Working Group for review. This is an important step because these reviews are carefully done by a diverse team of administrative individuals who contribute various areas of campus expertise and who examine the four cornerstones of CLL projects: (1) the integration of UBC's core academic mandate (research and teaching) with the University's operations; (2) partnerships between the University and private sector, public sector, or NGO organizations; (3) sound financial use of UBC's resources and infrastructure; (4) the potential to transfer the knowledge UBC gains into practical, positive action applicable to the greater community [47]. Then, the proposal is transferred to the CLL Steering Committee members (step 3). The CLL Steering Committee consists of a group of individuals from various areas and levels of authority who provide another thorough review of the project (Figure 2). If the working group considers the project to have potential, then (step 4) the Strategic Partnerships Office will approach the company for additional information to review further with the CLL Working Group. If the CLL Working Group agrees that there is a fit, then a champion for the project is identified, preferably an academic (step 5) (appointing a champion for a project can prove challenging when everyone is already balancing a full-time workload; an award policy was believed to increase attraction). Once a project champion has been appointed, a presentation is made to the CLL Steering Committee by the project champion for final vetting before an informal steering committee is created to develop a memorandum of understanding. Then, with the memorandum of understanding in place, formal committees are struck to govern the project (step 6). If UBC funding is required, then a detailed business case is also created for institutional project approval (step 7). For UBC, this body is the board of governors.

The number of review points and the number of groups reviewing projects before presenting proposals to the board of governors ensures that a majority of stakeholder representatives have had a chance to provide input before a project is initiated for funding. Refining the scope of the project can help to ensure that UBC receives just what it needs at the time it needs it. Integrating researchers on projects is a key component of the CLL program itself; it is important to develop as many avenues as possible to find the right people to work on a project (such as charrettes, steering committee meetings, etc.).
Finding the right people can involve breaking down silos and fostering greater interdisciplinary collaboration. This can also be seen as a benefit of the CLL (i.e., Figure 12).

Main Results of Case Study

The main results obtained through the analysis of the UBC CLL process case study are categorized as: the evolution of BPM evaluation documentation for unsolicited requests, the evolution of BPMs of case projects after recognized failures and lessons learned, an overview of proposed generic living lab processes, and key transferable characteristics from the ethnographic study and BPM exercises.

Business Process Modelling for Evaluation of Funding Opportunities

UBC funding requirements are evaluated as a business case from the perspective of UBC's board of governors. A BPM for this process was initially created by the Director of the Strategic Partnerships Office. Revisions from the original model included condensing the submission and initial review phases into one process and the addition of a project time component to illustrate typical durations.

Evolution of BPM Evaluation Documentation for Unsolicited Requests

The BPM presented in Figure 12 is the process for unsolicited requests derived by this study. It provides an overview of how the CLL's business processes for unsolicited proposal requests have developed since September 2010 for unsolicited project plan submissions greater than CAD 2.5 million. Key documents were developed to support this process: a 12-slide "proposal summary" deck (Table 1) and a business plan were added in January 2012, and a spider chart analysis was included in June 2013 (Figure 13).

Table 1. Proposal summary deck (slide # and item | contents):
(1) Introduction Slide | Project name, Company name, Company location, Company lead
(2) Presentation Outline | Slide headings of 3 to 12 on this list
(3) Executive Summary | How UBC helps achieve the company's corporate goals
(4) Opportunity Positioning | The key problem they are solving and why it is unlike any other product
(5) Solution Overview | Outlines the value proposition and core technology
(6) Solution Example | Describes how problems will be overcome
(7) Program Plan | Provides key resources, tasks, and milestones
(8)

Figure 13. Spider chart analysis for UBC CLL project selection, illustrating relative evaluation scores for a proposal for a variety of criteria. e@UBC opportunities (entrepreneurship at UBC opportunities).

Evolution of BPMs of Case Projects after Failures Recognized and Lessons Learned

The identification and mapping of the business processes used for the three representative projects shown in Section 5.2.1 were used primarily to inform general overall CLL BPM development.
All three of these projects were completed and regarded as successful projects, although they all led to the identification of opportunities for lessons learned and business process improvements. Some examples include the following:

• The CIRS project process evaluation demonstrated how helpful it is to have design charrettes informing the project early to aid with technology decisions. Additionally, it was realized that linking funding with specific building components can reduce the potential for specific items (and project objectives) to be lost through value engineering, and that having one decision-maker can streamline a project.

• The Academic District Energy System project process evaluation showed how long a process evaluating campus energy options can be, and how both third-party consultants and the campus community can collaborate. As an outcome of this project, the Bioenergy Research and Demonstration Facility emerged, the evaluation process for which formed the initial basis of all future CLL project evaluation processes (Figure 14).

• The timing, scale, and participants of the Bioenergy Research and Demonstration Facility project caused this project to become a primary vehicle for the initial development of the CLL processes and the collection of early lessons learned. Immediately following the implementation of the Bioenergy Research and Demonstration Facility, eight themes for improvement emerged (stakeholder engagement, funding, managing expectations, legal, risk assessments, champions and project managers, due diligence, and information sharing and communications), and from these, 12 recommendations emerged: (1) "Expand public consultations process to include other elements of community engagement" (resources on engagement here: http://tamarackcommunity.ca/). Proactive consultation is required early and often. (2) Provide sufficient funding and/or resources for pre-feasibility and feasibility work, project management, and due diligence and evaluation. (3) Identify secure project funding earlier in the project life cycle to prevent a "moving target" when approaching the UBC board of governors. (4) Inform all stakeholders of process steps, key decisions, milestones, and all UBC expectations at the outset. (5) Identify and share expectations and needs of all stakeholder groups at the project outset. (6) Host a project kick-off with all players. (7) Share the broad vision, knowledge, context, and objectives of the project, creating a consistent message and understanding of the project for all stakeholders. (8) Identify and adequately resource project managers and key champions within the organizations. (9) Assess all potential projects using technical and sustainability criteria, as well as against alternative possibilities, to ensure adequate due diligence. Ask the right questions.

These recommendations were implemented, along with a significant increase in the number of reviews and stakeholder checks throughout the process (including the preliminary review of the company and proposed technology via a slide deck summary and spider chart analysis, as described previously). Figure 9 represents the BPM followed for the Bioenergy Research and Demonstration Facility project; Figure 14 represents the same process mapped on top of the revised BPM for unsolicited requests >CAD 2.5 million, where the steps shown in grey represent tasks, reviews, and checks not included in the original project. It can be seen that significantly more steps have been added, leading to the final BPM requirements for such projects.

Proposed Generic Living Lab Processes

The goal of the UBC CLL is to improve sustainability by driving innovation that benefits its own operations and facilities, its core mission (research), and the industry partners' interests. The pursuit of this goal involves a particular focus on the demonstration phase of the technology readiness scale. This introduces a level of technological risk well beyond that of "tested" technological solutions, and this risk is significant because of the financial scale of campus infrastructure and facilities projects. Therefore, the business processes to manage and assess these projects are vital to achieving the CLL objectives while avoiding undue risks and project failures. It took a significant amount of time, as well as trial and error, to derive the tools necessary for evaluating high-technology, high-budget projects for technology readiness and strategy alignment.

Overview of Proposed Generic Living Lab Processes

A number of important attributes emerged from the case studies and analysis conducted under this research, including increasing support, aligning goals, improving processes, developing multi-stakeholder involvement, and developing strategic decision-making tools. In summary, UBC attempts to de-risk projects by leveraging UBC infrastructure investments with matching funds from industry and the government, by reducing potential liability on carbon taxes, and by using projects to contribute to research and teaching.
Therefore, the funding for any incremental costs arising from CLL projects is sought from sources external to the University [46]. Technology accessibility and development is increasing, which could lead to more widespread adoption if the inhibiting barriers were reduced by removing organizational barriers and addressing the future improvement steps given in Figure 15.

Figure 15. Generalized future improvement plan for UBC CLL.

Key Transferable Characteristics from the Ethnographic Analysis

The quantitative analysis (Section 5.1) showed that the majority of the CLL Working Group's time is absorbed by tasks related to the development of opportunities, assessing the environment, and developing a vision, strategy, and assessment tools. It is a delicate balance to juggle these items while trying to remain on course. To assist with strategizing, recommendations include a dedicated budget for the CLL Working Group and time allocated for a strategic retreat. The qualitative analysis provided a number of sub-themes, challenges, and partial solutions for further exploration. These are all meant to be a starting point for rigorous analysis and business case development. The summary of key transferable elements and characteristics obtained by the ethnographic study is provided as follows:

Develop strategic documents:
• Implement a governance model to capture all groups who may potentially work on CLL projects
• Link construction and operating costs into the building budget
• Incentivise deans to improve the operational efficiency of buildings
• Monitor energy usage of buildings, and ensure monitoring equipment is installed

Key Transferable Characteristics from the Business Process Modelling Analysis of UBC CLL

A summary of the key transferable elements and characteristics identified from the BPM analysis of the UBC CLL program is as follows: (1) an organizational structure for the University Sustainability Initiative (USI); (2) a diverse multi-stakeholder committee membership structure; (3) a process of categorizing projects based on size (high-level view); (4) a process of project evaluation (due diligence) and approval (mid-level view); (5) tools for project evaluation: slide deck and spider chart; (6) a process for selection of a research champion; (7) a process for selecting strategic partners; (8) design goals and charrettes for high-performance buildings; (9) an approach of linking funding to sustainable technologies so that they are not value-engineered out of the equation; (10) contests to solicit ideas for alternative energy; (11) the linking of feasibility studies to contests for the wider community to contribute ideas.
The UBC CLL documents and business processes were generalized, adapted based on the key transferrable characteristics, and assembled into a proposal (Model Overview) of generalized CLL processes that might be implemented by other institutions interested in pursuing a CLL initiative. The items that make up this proposal are either documents (represented by a "D") or processes (represented by a "P"), as outlined in Figure 16.

Discussion on Current Status of CLL

Overhead costs and changes in the teams and groups of the USI required simplification of the managerial staff and also caused restructuring of the existing administrative structure. As of October 2016, the CLL Steering Committee merged into the USI organization, and the working groups decided to become part of the bigger organization. The current organizational chart for the USI is undergoing restructuring. Figure 17 shows some of the updates based on limited information available through interviews conducted in spring 2017. Interviewees indicated that further organizational structure will be defined for the ongoing CLL activities, but these details have yet to be developed.
The new organization aims to merge the roles of the administrative director and the CLL working group management chair positions, and the new chair will provide leadership as the Sustainability Provost [46]. The USI has a steering committee that provides strategic guidance and oversight to UBC's campus-wide sustainability initiatives, including academic, research, operational, and policy decisions. The USI Steering Committee also works closely with a Student Sustainability Council and a Regional Sustainability Council. The Student Sustainability Council provides input on priorities in research and partnerships, teaching and learning, operations, and policy recommendations, and meets twice a year with the Steering Committee [46] (see Figure 1). After the reorganization started in October 2016 by downsizing the central management in the USI and CLL, the student sustainability council meetings are on hold, the faculty sustainability fellow meetings started in July 2017, and the regional sustainability council meetings are operating through informal channels for now [46]. The Steering Committee met twice in the last fiscal year. According to the interviewee Giffin, the Research Operations and Emission Committees under the project Steering Committee (Figures 1 and 2) did not meet for an extended time and were disbanded [48].

The CLL business processes that have been followed, which were initially substantial but somewhat ad-hoc in reaction to the demands of several large early CLL projects, have started to be increasingly formalized and clarified, but were consequently not found to be applicable in many cases [46]. The main committees still function through the project's life cycle, as illustrated in Figure 18 [46]. A future step of the CLL program is to include a bioenergy facility using the infrastructure of the previous facility built in 2013 [48], which reflects the infrastructure readiness for an upcoming project through the CLL.

Conclusions

Sustainability is a growing interest in the world, and UBC is developing a strategy for tackling some of the tough challenges related to improving efficiencies in energy production, transmission, and consumption through technology adoption with the Campus as a Living Lab (CLL) program. UBC is at a scale large enough to prove that a technology could work for other campuses or municipalities, and it can be a model for similar organizations or municipalities to follow for living lab programs and to reduce their ecological footprint. Therefore, the UBC CLL program analysis has the potential to be a demonstrative example for all large organizations looking for managerial models for living labs and to showcase leadership in sustainability. As identified in the literature review, there is a need for a structured managerial model and standardized tools for decreasing the complexity of innovation activities and operational processes for living labs. Additionally, according to the theoretical framework, multi-stakeholders' divergent interests must be addressed by collaborative alignment strategies [26][27][28][29][30].
From this perspective, this study defined a set of roadmap illustrations for the collaborative integration of multi-stakeholder projects. Additionally, lessons learned and related key transferrable characteristics are shared in Sections 6.3 and 6.4 for other institutions to benefit from. Through this extensive ethnographic research, answers to the main research questions are given by referring to the multiple tools generated for the structured managerial model of the UBC CLL. Addressing the main research questions, the results can be summarized as follows:

• How do CLL Working Group members and Steering Committee members interact during meetings to mediate the problems in the integration process of multiple stakeholders' needs? Quantitative analysis of the coded meetings is given in Section 5.1, where the meeting coding plot is explained by Figure 3. Interactions between the CLL Working Group members and Steering Committee members are depicted in Figures 5 and 6. Assessing the environment and developing opportunities to test new sustainable technology are the key issues discussed in the flux of these meetings. To address multiple stakeholders' needs without losing attention to key issues, further documentation such as the slide deck (Table 1), business plan (no format set), and spider charts (Figure 13) is required for project proposal requirements.

• How do these meetings lead to a better process definition for future actions? These meetings enabled the trial-and-error refinement of the multiple tools derived for UBC CLL projects. Three case studies (CIRS, the Academic District Energy System, and the Bioenergy Research and Demonstration Facility), shared through the generated business process models (BPMs) specific to the UBC CLL, give reference cases for a building, an analysis of campus-wide infrastructure options, and a facility project, respectively. They represent key projects to address UBC's long-term sustainability goals and cases to test the BPM method. For example, the Bioenergy Research and Demonstration Facility project acted as the main trial-and-error demonstration of the BPMs generated for CLL projects. Necessary improvements in the process have been depicted to update the BPM in Figure 14. Looking at the steps and documents added between the older (Figure 9) and updated (Figure 14) versions of this project's BPM, a step forward in process definition and added tools can be identified from the dark grey additions in Figure 14.

• Is it possible to document the development of the project processes through different charts, enabling a replicable method for universities and cities? According to our ethnographic study findings, it is possible to document the development of the project processes through replicable charts and graphs. Although different organizations may have different structures and needs, the generic business process model legend (Figure 4) has the potential to be implemented as a replicable model. The BPMs shared through this study demonstrate their potential use in various project evaluation steps. The generic model overview (Figure 16) illustrates the potential use of the generated documents, the CLL process model schemes for unsolicited requests, and high-performance buildings' continuous improvement at the managerial level, which can be adapted to the multifarious phases of CLL needs in other organizations (one way such an evaluation chart could be replicated is sketched below).
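As a concrete illustration of that replicability, the following sketch shows how an adopting organization might reproduce a spider chart evaluation in the spirit of Figure 13. This is not the authors' tooling, and the criteria names and scores are hypothetical, chosen only to show the plotting pattern.

```python
# A hypothetical sketch of a proposal-evaluation spider (radar) chart, in the
# spirit of Figure 13; criteria names and scores below are invented examples.
import numpy as np
import matplotlib.pyplot as plt

criteria = ["Strategic fit", "Technology readiness", "Research value",
            "Financial soundness", "Partnership strength", "Knowledge transfer"]
scores = [4, 3, 5, 2, 4, 3]  # one proposal's ratings on a 1-5 scale

# Evenly spaced angles, then repeat the first point to close the polygon
angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria, fontsize=8)
ax.set_ylim(0, 5)
ax.set_title("Proposal evaluation (illustrative)")
plt.show()
```

An adopting organization would substitute its own screening criteria and scoring scale; the chart itself is only a compact way to compare one proposal's relative strengths across criteria at a glance.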
BPMs aim to provide clear and detailed representations of the sequence of tasks associated with the CLL activities of identifying new project opportunities, evaluating project proposals, obtaining the institutional commitments for selected CLL projects, and overseeing these projects' stakeholders' roles as they move through their life cycle phases (Figure 18). BPMs have the potential to identify the task sequencing, the parties involved in each task, and some of the key documents or artefacts associated with these tasks. Additionally, the spider chart (Figure 13) illustrates the relative evaluation scores for a proposal, which could be replicated in alignment with the set criteria of the adopting organization.

While the extensive review process created through the UBC CLL aimed to ease the evaluation of high-budget, high-technology project proposals for the board of governors, the administrative structure of the CLL needed to be re-evaluated with changing actors. According to the final interviews conducted [46,48], the business process modelling used in some initial projects has not been used since, and the proposed BPM sets were not being pursued at the date of the interviews. The main contribution of the CLL to UBC is defined as its culture-building activity to enhance collaboration in opportunities where the campus can be used as a testbed with all of its resources, infrastructure, and facilities [46]. It is believed that the ethnographic study and the overall methodology with BPM analysis form a base to build on for UBC or any other organization interested in embedding sustainability at the campus/municipality scale. The generalized future improvement plan for the UBC CLL program (Figure 15) is a roadmap derived by the authors for future CLLs. Well-depicted processes reflect problems transparently, making the improvement steps easy to identify. Another main suggestion is the need for industry partners to be more involved in the operation process of projects partnered through the CLL (Figure 18). It was realized that the involvement of partners diminished in the operation phase, which started to change because the failures occurring through trial-and-error steps needed fixing. The goals of fostering sustainability innovation and continuous development/optimization of processes apply to UBC and other organizations (other universities, cities, municipalities, living labs). Therefore, this research generalizes the UBC CLL business processes, tools, and lessons learned to develop a set of proposed generic living lab processes for all interested organizations.

The main limitations of this study are the time limitation of the ethnographic study and the availability of information transferred through the formal meetings. It is believed that discussions regarding the BPMs are ongoing through informal meetings, which cannot be tracked.
2021-05-10T00:02:55.810Z
2021-02-05T00:00:00.000
{ "year": 2021, "sha1": "722e0135414b426d8005576ce6387084da2ac259", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/13/4/1739/pdf?version=1613703135", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "dcae5875a19bf53aff5e889bf8ad8680909d58f9", "s2fieldsofstudy": [ "Environmental Science", "Education" ], "extfieldsofstudy": [ "Computer Science" ] }
254927216
pes2o/s2orc
v3-fos-license
Perceived Barriers to Increasing Diversity within Oculofacial Plastic Surgery

Purpose  Physician diversity is limited in ophthalmology and oculofacial plastic surgery. Determination of barriers within the application process for oculofacial plastic surgery may help target efforts to improve the recruitment of underrepresented groups. This study aimed to illuminate perceived barriers to increasing diversity in oculofacial plastic surgery trainees, according to the American Society of Ophthalmic Plastic and Reconstructive Surgery (ASOPRS) fellows and fellowship program directors (FPDs). Methods  During the month of February 2021, we sent surveys out to 54 current oculofacial plastic surgery fellows and 56 FPDs at 56 oculofacial plastic surgery programs recognized by the ASOPRS nationwide using a 15-question Qualtrics survey. Results  Sixty-three individuals (57%) responded to the survey: 34 fellows (63%) and 29 FPDs (52%). Eighty-eight percent of fellows and 68% of FPDs identified as non-underrepresented in medicine (UiM). Forty-four percent of fellows and 25% of FPDs identified as men. FPDs most commonly noted, "Not enough minorities applying to our program" and "The objective data (Ophthalmic Knowledge Assessment Program score, United States Medical Licensing Examination Step scores, clinical honors, Alpha Omega Alpha status, letter of recommendation) for minority applicants often do not meet the threshold required to offer an interview or to be ranked to match" as barriers. Among fellows, the lowest-rated considerations when applying to oculofacial plastic surgery were "Racially/ethnically diverse faculty" and "Perceptions of minority candidates by fellowship programs," whereas "Likelihood of matching in program of choice" was ranked highest in considerations. Fellows identifying as men indicated greater concern for "Financial factors related to fellowship (e.g., loans, salary, cost of living, or cost of interviewing)" compared to fellows identifying as women, who noted greater concern for "Program or preceptor acceptance of starting or having a family during fellowship." Conclusion  Responses from FPDs suggest that efforts focused on recruiting and supporting diverse students to medicine and ophthalmology, mentoring applicants interested in oculofacial plastic surgery, and restructuring the application process to decrease bias, may improve diversity within the subspecialty. The lack of UiM representation in this study, 6% fellows and 7.4% FPDs identified as UiM, shows both the stark underrepresentation and the need for further research into this topic.

Over the past several decades, the increasing diversification of the United States population has been documented across different communities. 1 Representation of individuals from groups historically underrepresented in medicine (UiM) (Black/African Americans, Hispanic/Latinx, Pacific Islanders, and Native Americans, Alaskans, and Hawaiians) has declined across multiple medical specialties when shifts in population composition are taken into account. 2,3 In 2021, approximately 25% of medical school matriculants identified as UiM (1.1% American Indian or Alaska Native, 9.7% Black or African American, 11.8% Hispanic, Latino or of Spanish origin, 0.4% Native Hawaiian or Other Pacific Islander).
4 Within ophthalmology, UiM are even further underrepresented, with roughly 6% of practicing ophthalmologists, 5.7% of ophthalmology faculty, and 7.7% of ophthalmology residents self-identifying as UiM, compared with the 30.7% that contribute to the U.S. population. 3 This trend percolates through the various ophthalmologic subspecialties as well, including retina and oculofacial plastic surgery, with only 4.2 and 9.4% of practicing specialists self-identifying as UiM, respectively. 5,6 This lack of representation can diminish access to adequate health care and drive health disparities. 7 Prior literature has shown that UiM physicians are more likely than their non-UiM counterparts to work in predominantly UiM communities, which are also more likely to experience physician shortages and the aforementioned disparities. 8,9 Moreover, a body of literature exists that has demonstrated a positive association between race-concordance and important aspects of the physician-patient interaction, including cultural competence, communication, and patient satisfaction. [10][11][12]

In addition to racial and ethnic disparities, sex and gender disparities also exist in many surgical subspecialties. In ophthalmology, there has been some progress toward the representation of women over recent years. 13 Medical school enrollment of women was 52.7% in 2021. In 2018, 40% of ophthalmology residents identified as women, and in 2019 about 27% of ophthalmologists were women. 13,14 Within the population of oculofacial plastic surgeons, about 45.8% of fellows have been women since 2008, according to the American Society of Ophthalmic Plastic and Reconstructive Surgery's (ASOPRS) general membership. 13 However, only 22% of full professors in ophthalmology identified as women in 2019, and there is a known lack of women in ophthalmology leadership roles. 13 The lack of representation in leadership roles and academic positions impacts patient care, as prior research has found that gender influences patient counseling services, communication styles, and patient satisfaction. 5

In 2020, transgender and gender nonbinary (TGNB) people accounted for about 0.8% of medical school matriculants. 15 Representation of TGNB people in surgery and surgical subspecialties is lacking. General surgery and surgical subspecialties have been perceived as least accepting of sex and gender minority students when compared to other specialties, contributing to significant barriers experienced by TGNB people. 16 Therefore, the authors surveyed ASOPRS oculofacial plastic FPDs and fellows to identify barriers to increasing diversity within oculofacial plastic surgery.

Methods

During the month of February 2021, a 15-question Qualtrics survey (Qualtrics; Provo, UT) was electronically distributed to all oculofacial plastic FPDs and current fellows at 56 oculofacial plastic surgery programs recognized by the ASOPRS. Fellows were asked to rate how concerned they were about each of 16 barriers when deciding to pursue a fellowship in oculofacial plastic surgery, on a 5-point Likert scale: 1 (not at all concerned), 2 (slightly concerned), 3 (somewhat concerned), 4 (moderately concerned), and 5 (extremely concerned). FPDs were asked to select all perceived barriers they believed precluded them from recruiting diverse trainees into their fellowship programs. Moreover, both fellows and FPDs were given the option to write in any barriers they deemed important but were not listed.
The FPD survey portion was adapted from McDonald et al 17 due to the similar goals and the precedent it set as one of the only studies of its kind. In addition to perceived barriers, demographic data were also collected from all respondents; this included geographic region, gender identity, race/ethnicity, and household income during childhood. We did not collect which program each participant was part of, to protect anonymity. During the study period, three reminders were sent out to all participants, and all data were captured, anonymized, and stored within Qualtrics. SAS (SAS Institute; Cary, NC) was used for data management and statistical analysis, with an alpha level of p < 0.05 used as the cutoff for statistical significance. All research activities for this study were deemed exempt from ethical review by the institutional review board at the University of California, San Francisco, CA. Data collection was Health Insurance Portability and Accountability Act compliant, adhering to the tenets of the Declaration of Helsinki.

Results

A total of 63 individuals (57%) responded to the survey. Of those, 34 (63%) were current fellows and 29 (52%) were current FPDs. In terms of gender distribution, 44% of fellows identified as men, while 56% identified as women; no respondents selected nonbinary or "please list if not specified" gender choices. Eighty-six percent of FPDs identified as men, whereas 3% identified as women and 11% preferred not to answer. Of those who disclosed their racial/ethnic identity, 6% of fellows and 7.4% of FPDs identified as UiM (►Table 1). The geographic spread of respondents was fairly even across the four major regions of the U.S. Lastly, of those who disclosed their average childhood household income, 56% of fellows and 40% of FPDs came from homes with an income greater than $100,000.

Fellows reported the most concern about the likelihood of matching when applying to oculofacial plastic surgery; namely, the "Likelihood of matching in the program of choice," the "Likelihood of matching" overall, as well as the "Likelihood of matching in location of choice." Conversely, the trainees who responded were least concerned about the following topics when considering their application to oculofacial plastic surgery fellowships: "Racially/ethnically diverse faculty" followed by "Perceptions of minority candidates by fellowship programs" (►Table 2). Differences between UiM and non-UiM fellow responses were assessed. Compared to non-UiM fellows, UiM fellows reported greater concern for the elements comprising fellowship applications, including "Competitiveness of Ophthalmic Knowledge Assessment Program (OKAP) score," "Required number of research projects/publications," and "Required number of Honors/Awards/Distinctions," when applying to oculofacial plastic surgery fellowship programs (►Table 3).
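The two-group comparisons here (and the gender comparisons that follow) rest on comparing ordinal Likert ratings between respondent groups. The paper reports that SAS was used but does not state which test was applied; the Python sketch below, built entirely on hypothetical ratings, only illustrates one common way such a comparison could be run, using a rank-based Mann-Whitney U test suited to ordinal data.

```python
# Minimal sketch (not the authors' SAS code): comparing 5-point Likert concern
# ratings between two respondent groups with a rank-based test.
# The ratings below are hypothetical, invented only for illustration.
from scipy.stats import mannwhitneyu

# Hypothetical ratings (1 = not at all concerned ... 5 = extremely concerned)
uim_ratings = [4, 5, 3, 4, 5]                # e.g., UiM fellows, one survey item
non_uim_ratings = [2, 3, 2, 3, 1, 2, 3, 2]   # e.g., non-UiM fellows, same item

# Two-sided Mann-Whitney U test; appropriate for ordinal Likert data
stat, p_value = mannwhitneyu(uim_ratings, non_uim_ratings, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")  # compare against the study's alpha of 0.05
```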
Table 1. Survey participant demographics of the American Society of Ophthalmic Plastic and Reconstructive Surgery fellowship program directors and fellows partaking in the appraisal of potential barriers to increasing underrepresented in medicine representation.

Gender-based differences between fellow responses were also assessed. Among this sample, fellows identifying as men indicated greater concern for "Financial factors related to fellowship (e.g., loans, salary, cost of living, or cost of interviewing)" at the time of application to oculofacial plastic surgery compared to fellows identifying as women (p = 0.06). Women fellows, however, noted greater concern for "Program or preceptor acceptance of starting or having a family during fellowship" (p = 0.07) (►Table 4). These findings did not reach statistical significance.

Three factors were most commonly identified as potential barriers among FPDs. These factors were, in order from most to least commonly cited, "Not enough minorities applying to our program," "Other perceived barrier(s) not listed above," and "The objective data (OKAP exam score, United States Medical Licensing Examination [USMLE] Step scores, clinical honors, Alpha Omega Alpha status, letter of recommendation [LOR]) for minority applicants often do not meet the threshold required to offer an interview or to be ranked to match" (►Table 5). Respondents that selected "Other perceived barrier(s) not listed above" were provided space to elaborate on their selection. Submitted responses fell into three thematic categories. The first emphasized a lack of UiM resident mentorship in the field of oculofacial plastic surgery, while the second posited a potential geographic barrier in which UiM candidates are heavily recruited by coastal programs, leaving few options for programs in the Midwest. The third and final category suggested that these FPDs perceived no unique barriers to oculofacial plastic surgery faced by UiM applicants; these programs also did not rank applicants based on characteristics of identity (e.g., gender, race, sexual orientation).

Discussion

The purpose of this study was to investigate the perceived barriers to increasing diversity within the field of oculofacial plastic surgery from the viewpoints of ASOPRS fellows and FPDs. According to the surveyed FPDs, the main perceived barriers to increasing diversity in the oculofacial plastic surgery workforce are a lack of UiM applicants and the objective metrics of UiM applicants not meeting required threshold levels (e.g., OKAP scores). The lack of applicants to the field of ophthalmology poses a significant problem in the recruitment pipeline. As medical school exposure to ophthalmology declines, fewer students have applied to ophthalmology in recent years than in the past. 18,19 Developing methods to recruit underrepresented people into medical school and then into ophthalmology is key, followed by mentorship and support for those interested in oculofacial plastic surgery. Residency programs outside of ophthalmology have shown success in recruiting UiM residents by establishing and maintaining institutional financial support to develop and sustain respectful partnerships with communities and schools. These partnerships include pipeline programs for all education levels, programs that actively engage student advisors, and intentional investment in the community through employment.
20-23 Some FPDs noted in their survey the lack of UiM mentorship in the field of oculofacial plastic surgery. Previous findings show that representation and visibility of diversity among residents and faculty members, including having a UiM mentor, have been helpful for trainees to garner a stronger interest in academic careers as well as to disrupt stereotypes. 24 This is particularly important in the field of oculofacial plastic surgery, as fellows can work with a single preceptor and no co-fellow peers over the span of the 2-year training program. Encouraging ophthalmology faculty to mentor UiM medical students and ophthalmology residents interested in oculoplastic surgery, or other specialties, so that they are highly competitive is important. Continued mentorship in fellowship and beyond, to develop a successful career after fellowship, is also essential. Fortunately, pathway programs such as the Minority Ophthalmology Mentoring program and Rabb Venable are already making strides in UiM representation in ophthalmology.

Table 5. Perceived barriers selected by FPDs, n (%):
Other perceived barrier(s) not listed above: 6 (14)
The objective data (OKAPs score, USMLE Step scores, clinical honors, AOA status, LOR) for minority applicants often do not meet the threshold required to offer an interview or to be ranked to match: 5 (12)
None are applicable: 4 (9)
We consistently rank minority applicants high but can never seem to match them: 3 (7)
We do not have enough minority faculty: 1

These programs expose interested UiM medical students to the field of ophthalmology, pair students with a mentor, and provide resources to achieve success on standardized exams (i.e., USMLE Step 1) and throughout the residency application process. Similar programs in other specialties, such as the Perry Initiative and Nth Dimensions, designed to increase representation in the highly competitive specialty of orthopedic surgery for women and UiM, respectively, have found tremendous success, boasting a 96% match rate for participants. 25

The FPDs' other reported perception, that UiM trainees often do not meet required thresholds for objective metrics, suggests the need for structural changes in the fellowship application process. While UiM individuals have been found to have lower test scores compared to their non-UiM counterparts, this more reflects generations of structural and interpersonal racism and bias than individual underachievement. [26][27][28] To take steps to combat racism on the interpersonal level, unconscious bias training can be useful. 26 This training may also be supportive in exploring the beliefs motivating the comments around "no unique barriers faced by UiM applicants" and the consideration of gender and race/ethnicity in the application process. This outdated, biased, and ineffective "color blind" philosophy directly impedes intentional actions to increase representation by UiM and gender minority physicians. Additionally, admission committees have been shown to demonstrate pro-white bias; thus, recruitment of more UiM physicians, who are less likely to demonstrate pro-white bias, into these committees may also increase representation. 26,29 National organizations have taken great steps toward encouraging holistic review, with the USMLE changing the Step 1 exam to pass/fail and the American Academy of Ophthalmology releasing a position statement that OKAP scores should not be used for decision-making in fellowship applications.
27 In addition to the use of test scores, the use of LORs has been shown to be of high importance in fellowship selection within the field of ophthalmic plastic and reconstructive surgery; however, implicit bias found within LORs can present an additional barrier to applicants not meeting the required threshold. [29][30][31][32][33] Suggestions for addressing this barrier include the use of freely available artificial intelligence tools when writing letters, as well as the use of a standard LOR, as currently used in the fields of emergency medicine, orthopaedic surgery, and otolaryngology. 29,34 The holistic review of an application has been shown to be beneficial to both individuals and programs, and the use of a task force can help streamline the process of holistic review for graduate medical education programs. [35][36][37]

Interestingly, when comparing responses from men and women fellows, women fellows were more concerned about starting a family and the perceptions of program leadership surrounding this, whereas men fellows were more concerned about the financial aspects of fellowship (e.g., salary, loans, etc.). These findings mirror those identified in other specialties. Cochran et al found that women surgeons were far more likely than their men counterparts to agree that having children would be a career impediment, and were less optimistic that they could overcome the child-rearing-related career barriers represented by their desire to have children. 38 Prior studies have shown the additional work done by physician mothers per day in comparison to fathers and the impact that can have on work-life balance and burnout. 39,40 Structural changes in programs such as increased control over scheduling, practice efficiency improvements, gender-specific mentorship, and home-life directed interventions (i.e., onsite or readily accessible high-quality backup childcare) can help promote gender parity. [39][40][41][42]

In assessing the current state of diversity within oculofacial plastic surgery, it is also important to note progress relative to other medical specialties. Studies have observed that 7.2% of practicing dermatologists, 5.8% of practicing vascular surgeons, and 6.8% of practicing orthopaedic surgeons identify as UiM, compared to 9.4% in oculofacial plastic surgery. 5,43-45 Furthermore, the UiM composition of the U.S. medical student body sits roughly around 15%. 46 Though also quite far below national representation, these numbers are encouraging in that they suggest oculofacial plastic surgery has forward movement and that perhaps the deficiency in the "pipeline" lies further upstream of the medical student level. However, this is not intended to elicit complacency, as overall UiM representation, especially when compared with the demographics of the patients we serve, is still lacking.

Our study has several limitations. First, this is a survey-based study with a response rate of only 57% of current ASOPRS fellows and FPDs, which could result in sampling bias; despite geographic diversity, perhaps the perspectives of individuals already dedicated to increasing UiM representation within oculofacial plastic surgery were overrepresented. Additionally, while the survey instrument has been used by authors from another medical specialty, the survey has not been formally validated. A third notable limitation of this study was the limited UiM fellow and FPD representation in the response pool.
The majority of those that responded identified as either White or Asian; only two ASOPRS fellows and two FPDs self-identified as UiM. This lack of UiM representation in the survey likely affects the survey responses. For example, although the current fellowship pool did not rate "Racially/ethnically diverse faculty" and "Perceptions of minority candidates by fellowship programs" highly, these metrics may be important for the recruitment of UiMs and may be ranked differently if the make-up of fellows had greater UiM representation. This clearly poses a significant obstacle when trying to better understand the barriers faced by this group. However, this also perfectly illustrates the lack of representation among current ASOPRS fellows and FPDs. Collecting responses from ophthalmology residents and medical students interested in ophthalmology (though the percentage of UiMs is also restricted in this population) may help to elucidate some of the upstream barriers dissuading UiMs from pursuing oculofacial plastic surgery fellowship training. Additionally, our study did not have representation from TGNB people among those that responded to the survey. Future studies focused on understanding barriers faced by TGNB people specifically are imperative to further increasing diversity in the field of oculofacial plastic surgery.

To the authors' knowledge, this is the first study examining barriers to the field of oculofacial plastic surgery for UiM and women trainees. Given this, future studies should replicate this work with a larger, more diverse cohort, and potentially explore the perspectives of UiM and women medical students and ophthalmology residents. These findings highlight the need for bolstered efforts focused on the recruitment of UiM individuals and women to ophthalmology and oculofacial plastic surgery, as well as support for holistic review of residency and fellowship applicants. By taking intentional, evidence-based steps, FPDs and division chiefs may improve UiM and women representation within the discipline of oculofacial plastic surgery.
2022-12-22T14:04:00.955Z
2022-02-19T00:00:00.000
{ "year": 2022, "sha1": "9ae08b612f0a7742a452c350414e34fe3158e8b6", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1055/s-0042-1758561", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6217cd76ce511e862e40d78697ff7a0dbb8751bc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
158742648
pes2o/s2orc
v3-fos-license
Large, climate-sensitive soil carbon stocks mapped with pedology-informed machine learning in the North Pacific coastal temperate rainforest

Accurate soil organic carbon (SOC) maps are needed to predict the terrestrial SOC feedback to climate change, one of the largest remaining uncertainties in Earth system modeling. Over the last decade, global-scale models have produced varied predictions of the size and distribution of SOC stocks, ranging from 1000 to >3000 Pg of C within the top 1 m. Regional assessments may help validate or improve global maps because they can examine landscape controls on SOC stocks and offer a tractable means to retain regionally-specific information, such as soil taxonomy, during database creation and modeling. We compile a new transboundary SOC stock database for coastal watersheds of the North Pacific coastal temperate rainforest, using soil classification data to guide gap-filling and machine learning approaches to explore spatial controls on SOC and predict regional stocks. Precipitation and topographic attributes controlling soil wetness were found to be the dominant controls of SOC, underscoring the dependence of C accumulation on high soil moisture. The random forest model predicted stocks of 4.5 Pg C (to 1 m) for the study region, 22% of which was stored in organic soil layers. Calculated stocks of 228 ± 111 Mg C ha−1 fell within ranges of several past regional studies and indicate 11–33 Pg C may be stored across temperate rainforest soils globally. Predictions compared very favorably to regionalized estimates from two spatially-explicit global products (Pearson's correlation: ρ = 0.73 versus 0.34). Notably, SoilGrids 250 m was an outlier for estimates of total SOC, predicting 4-fold higher stocks (18 Pg C) and indicating bias in this global product for the soils of the temperate rainforest. In sum, our study demonstrates that CTR ecosystems represent a moisture-dependent hotspot for SOC storage at mid-latitudes.

Introduction

Accurate global soil organic carbon (SOC) maps are necessary to validate terrestrial carbon (C) cycle predictions in Earth System Models (Todd-Brown et al 2013); however, current SOC models drawing on international pedon (soil profile) databases (SoilGrids, HWSD, ISCN, etc) display considerable differences (Köchy et al 2015, Sanderman et al 2017, 2018a). Database construction, including filling data gaps, accounts for some of these discrepancies (Tifafi et al 2018), while other sources of uncertainty are associated with scaling up spatially from relatively sparse pedon observations (∼1 m²) to globally gridded products (Todd-Brown et al 2013) and the loss of information relevant to SOC storage at intermediate scales (10 m²-1 km²), such as landscape topography (Riley 2015, Siewert 2017). Regional digital soil mapping may help bridge these scale discontinuities and produce finer resolution (<100 m) predictions that retain information on the spatial drivers of SOC storage (Minasny et al 2013). For example, Sanderman et al (2018b) compiled mangrove SOC stock measurements and used them in conjunction with a global SOC map and maps of environmental covariates to estimate global mangrove SOC at 30 m resolution.
The integration of detailed pedological information with machine learning approaches for large-scale spatial predictions may also enable improvements in SOC mapping (Ramcharan et al 2017), and regional SOC assessments may help diagnose errors in global products by providing higher resolution information on SOC controls and its distribution. Regional SOC assessments have, to date, focused on Arctic and boreal permafrost soils (Hugelius et al 2013, 2014), while coastal temperate rainforests (CTR) have not received similar attention despite their similarly high SOC storage (Carpenter et al 2014). Globally, temperate rainforests contain the highest density aboveground forest C stocks (up to 1500 Mg ha−1; Keith et al 2009), and can be found along the coastal margins of North and South America, Japan and Korea, Australasia, and Scandinavia (Alaback 1991). The N. Pacific coastal temperate rainforest (NPCTR) biome is the largest example, and spans 4000 km of the N. American coast from the Russian River in California to Kodiak Island in the Gulf of Alaska (Della-Sala 2011). Although several studies have produced regional estimates of SOC stocks in Alaska (Leighty et al 2006, Johnson et al 2011, Mishra and Riley 2012), no studies to date have produced spatially-explicit SOC stock estimates across the transboundary domain of southeast Alaska (SEAK) and coastal British Columbia (BC). Soils of the NPCTR can store large quantities of SOC, especially in the wet seasonal and perhumid zones (Carpenter et al 2014), with stocks >300 Mg ha−1 (to 1 m mineral soil depth) frequently observed in SEAK (Johnson et al 2011) and stocks of >200 Mg ha−1 common in coastal BC (Shaw et al 2018). These large SOC stocks have accumulated in distinctive soil conditions across the NPCTR's mosaic of three hydropedologic landscape units (Lin et al 2006): (1) upland forest soils on well-drained slopes, (2) forested wetlands, and (3) poor (lowland) fens (Neiland 1971, D'Amore et al 2015). Despite their relatively young age (12-14 cal ka BP; Eamer et al 2017), elevated C concentrations are observed in mineral soils that often exceed 1 m in depth (Chandler 1943, Michaelson et al 2013) due to a combination of rapid mineral weathering, high primary production and litter inputs, and the translocation of soluble C into deeper horizons (Alaback 1991). In addition to mineral soils, the perhumid NPCTR also exhibits a variety of vertically-accreting organic soils, including deep (3-5 m) peat-forming bogs and fens (Heusser 1952, 1954, Hansen 1955, Ugolini and Mann 1979), and thick (>40 cm) forest floor organic horizons that accumulate due to slow decomposition under ubiquitous hydric soil conditions and the rarity of fire (Alaback 1991). In places, deep (>40 cm) organic horizons overlay C-rich mineral soils (known as Folisols, or folistic horizons) and contribute to the highest C stocks in the NPCTR (D'Amore and Lynn 2002, Fox and Tarnocai 2011, Johnson et al 2011, Michaelson et al 2013). Quantifying total SOC storage across the NPCTR and understanding its environmental controls is necessary to predict the region's response to global change, including climate feedbacks. Observational data (Buma and Barrett 2015) and ecosystem C models (Genet et al 2018) indicate that the NPCTR is sequestering C.
Changes in the amount and form of precipitation and higher temperatures may increase growing season length and productivity (Buma et al 2016); however, soil warming may lead to more rapid decomposition of soil organic matter (Davidson and Janssens 2006, Fellman et al 2017). In the present study we address the need for a unified SOC model for the NPCTR by compiling a new transboundary pedon database across SEAK and coastal BC that retains relevant pedological details. With this database we train a predictive model to estimate total SOC stocks spatially across the region, to enable meaningful comparisons with other regional and global SOC products, and to explore the environmental controls on SOC in the NPCTR.

Study extent and characteristics
The largest climatic zones within the NPCTR are the seasonal and perhumid forests that form a transboundary extent across SEAK and coastal BC (Alaback 1991). The SOC assessment encompassed all of the perhumid and part of the seasonal zone, spanning 10° of latitude (figure 1). The study perimeter was defined by the outer boundary of rainforest-dominated watersheds mapped using a harmonized transboundary dataset (Gonzalez Arriola et al 2018) between the Fraser River in Vancouver, BC and Lituya Bay south of Yakutat, Alaska, excluding the four major river basins (Taku, Stikine, Nass, Skeena), which extend into interior boreal forest and a more continental climate. Mean annual precipitation across the study domain ranges from 1800 to >3000 mm and mean annual temperatures range from 6 °C to 9 °C, with monthly means of −5 °C in winter in the north (Farr and Hard 1987) and ∼15 °C in summer in the south (Alaback 1991). Forest species diversity is relatively low, reflecting a consistent climate and disturbance regime across the study area, and is generally dominated by Picea sitchensis (Sitka spruce) and Tsuga heterophylla (western hemlock) in SEAK (van Hees 2003). In BC, Tsuga heterophylla and Thuja plicata (western redcedar) become the dominant conifers. Callitropsis nootkatensis (yellow cedar) and Tsuga mertensiana (mountain hemlock) are found from sea level in the north to high elevations in the south, and Pinus contorta var. contorta (shore pine) is a significant component of bog locations throughout. Geology is varied, consisting of granitic, basaltic, and limestone bedrock, the latter of which supports some of the most productive forest; however, much of the surficial geology is dominated by glacial drift including ablation and compact till, alluvial outwash, and glaciomarine sediments (Nowacki et al 2003).

Transboundary SOC database
We compiled a transboundary database of >1300 soil profile descriptions (pedons) across SEAK and BC from published and archive data sources. For each pedon we calculated SOC stocks for the top 1 m of mineral soil plus surface organic horizons using data harmonization and gap-filling procedures that are detailed in the supplementary information (supplementary tables 1-5). The database is available online at: https://datadryad.org/resource/doi:10.5061/dryad.5jf6j1r. In brief, US soil classification was converted to Canadian where necessary and gaps were filled with published values or modeled estimates grouped by soil class, horizon, and lithology. In contrast to some other regional and global C assessments, this approach avoided the use of generalized empirical relationships between soil properties and missing variables, such as between soil C and soil bulk density, or soil C and depth.
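To make the class-grouped gap-filling concrete, here is a minimal sketch in Python. It assumes a flat pedon table with illustrative column names (soil_class, horizon, lithology, bulk_density); this is an interpretation of the approach described above, not the authors' code.

```python
# Sketch of pedology-informed gap-filling: missing bulk density is filled
# from the median of pedons sharing soil class, horizon type, and lithology,
# rather than from a generic carbon-to-bulk-density pedotransfer function.
import pandas as pd

def fill_bulk_density(pedons: pd.DataFrame) -> pd.DataFrame:
    filled = pedons.copy()
    # Fill within progressively coarser groupings until no gaps remain.
    for cols in (["soil_class", "horizon", "lithology"],
                 ["soil_class", "horizon"],
                 ["soil_class"]):
        filled["bulk_density"] = (
            filled.groupby(cols)["bulk_density"]
                  .transform(lambda s: s.fillna(s.median()))
        )
    return filled

def horizon_soc_mg_ha(c_pct, bd_g_cm3, thickness_cm, coarse_frac=0.0):
    """SOC stock (Mg ha-1) of one horizon: C concentration (% by mass) x
    bulk density (g cm-3) x thickness (cm), corrected for coarse fragments;
    1 g C cm-2 corresponds to 100 Mg ha-1."""
    return (c_pct / 100.0) * bd_g_cm3 * thickness_cm * (1.0 - coarse_frac) * 100.0
```

A pedon's total stock to 1 m is then the sum of its horizon stocks, truncating the deepest mineral horizon at the 1 m reference depth.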
Environmental covariates
Environmental covariates were selected (supplementary table 6) to predict SOC stock on the basis of their relationship with soil forming factors (climate, organisms, relief, parent material, and time; Jenny 1994). Covariate data were extracted from the rasters at the pedon coordinates and appended to the final SOC stocks (in supplementary material) for use in all further analyses. Further details of the 12 selected environmental covariates, along with justification for inclusion and pre-processing steps, are listed in supplementary table 6. Briefly, only high quality and spatially continuous data products were used. Curating covariates based upon knowledge of regional soil development facilitates clearer interpretation and reduces the risk of autocorrelation between variables.

Random forest model
A random forest model was trained to predict stocks of SOC across the NPCTR in R (v3.4; R Core Team 2018, www.R-project.org) using the R package randomForest (v4.6; Liaw and Wiener 2002). Random forests grow a large number of regression trees (Breiman et al 1984) from different random subsets of training data and predictor variables, thereby reducing variance relative to single trees and greatly reducing the risk of over-fitting and non-optimal solutions, though at the cost of interpretability (Breiman 2001). The transboundary database SOC stocks and associated covariates were first split into training (80%) and testing (20%) data and the model was parameterized to grow 5000 trees. For each tree, a subsample equivalent to one quarter of the total sample size was utilized (with replacement). Node size was set at 4 to minimize the out-of-bag error based on preliminary testing. Model performance was measured from goodness-of-fit, distributions of residuals, and predictions of test SOC stocks. Confidence intervals were computed using an infinitesimal jackknife procedure (Wager et al 2013). Predictions were made across the NPCTR study extent using the R package raster (v2.6; Hijmans 2017), which produced a SOC map at 90.5 m resolution. All lakes >10 ha were clipped from the final map (HydroLakes; Messager et al 2016) and glacier area was clipped using the Randolph Glacier Inventory 5.0 (GLIMS; Raup et al 2007) database. Final SOC stocks were adjusted for topography by scaling the SOC map with actual land surface area calculated from cell slope values. The random forest model was re-run for the three gap-filling sensitivity analyses (see SI; the final SOC map is available at: https://datadryad.org/resource/doi:10.5061/dryad.5jf6j1r).
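For readers more familiar with Python, the stated model settings translate roughly into the scikit-learn sketch below. This is an illustrative re-expression, not the authors' R workflow: the covariate matrix and stocks are stand-in random data, and mapping the randomForest nodesize parameter onto min_samples_leaf is an assumption.

```python
# Illustrative scikit-learn analogue of the described random forest:
# 80/20 split, 5000 trees, each tree trained on ~1/4 of the data drawn
# with replacement, terminal node size 4, out-of-bag error tracked.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 12))                    # stand-in for 12 covariates
y = rng.lognormal(mean=5.0, sigma=0.6, size=800)  # stand-in SOC stocks (Mg/ha)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20,
                                                    random_state=0)
rf = RandomForestRegressor(
    n_estimators=5000,    # 5000 trees
    bootstrap=True,
    max_samples=0.25,     # ~1/4 subsample per tree, with replacement
    min_samples_leaf=4,   # assumed analogue of randomForest nodesize = 4
    oob_score=True,
    n_jobs=-1,
)
rf.fit(X_train, y_train)
print("OOB R^2:", round(rf.oob_score_, 2))
print("Test R^2:", round(rf.score(X_test, y_test), 2))  # paper reports ~0.32
```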
Comparison to regional and global maps
Stocks of SOC were compared with two previous Alaskan studies, two regional/national models, and two global models (table 1). Published summary statistics for NPCTR regions were either referred to directly (Johnson et al 2011) or estimated from published data (Michaelson et al 2013). The Canadian SOC map produced by Tarnocai and Lacelle (1996) was regionalized, rasterized, and resampled to extract pixels that overlapped with the study boundary, and the methods used to calculate mean and total SOC stocks were replicated (available at: http://sis.agr.gc.ca/cansis/interpretations/carbon/index.html). Two global SOC maps, SoilGrids250m and the Global Soil Organic Carbon map (GSOC; FAO and ITPS 2018), were downloaded as rasters and resampled from 250 m and 1 km resolutions, respectively. SoilGrids250m was built from a database of ca. 150 000 soil profiles and a stack of 158 covariates to produce a continuous global surface of SOC stock to 1 m, whereas the FAO GSOC map is a composite of national SOC stock assessments and covers a depth of 0-30 cm. Genet et al (2018) estimated SOC across the N. Pacific Landscape Conservation Cooperative using pedons from relevant forest cover types in SEAK. Differences in SOC stocks are explored quantitatively in the context of different extents, gap-filling procedures, and data sources. Finally, the predictive accuracy and bias of the random forest model are compared to the two global SOC products (SoilGrids250m and the FAO GSOC map).

Spatial controls of SOC
To explore controls on SOC stocks across the NPCTR, classification and regression tree (CART) analysis was applied to the transboundary SOC database using the R package rpart (v4.1; Therneau and Atkinson 2018). Unlike the weak-learner regression trees grown in random forests, CART analyses fit to entire datasets provide readily interpretable outputs. CART is also well suited to interpreting complex data with many interacting variables and non-normally distributed data, and can identify key covariate interactions and thresholds.

Summary of NPCTR observations
Pedon SOC stocks and depths were log-normally distributed (supplementary figures 2(a)-(d)). Median soil depth across all the samples was 66 cm and median calculated SOC density was 168.4 Mg ha−1. Other database summary statistics are provided in supplementary table 7. Soil classes in the pedon database were mostly Spodosols (426), Inceptisols (214), and Entisols (84), with fewer Histosols (70) and Folists (9). Sample locations were generally well distributed across the study extent, with some clustering around southern Vancouver Island, northern BC, and central SEAK (figure 1; supplementary figures 2(e) and (f)). The distributions of the environmental covariate data extracted at pedons were generally very similar to the distributions of covariates across the region (supplementary figures 2(e)-(n)). Samples were slightly biased toward lower and less steep areas, and the presence of large icefields and high alpine terrain (not sampled) explained discrepancies in percent forest cover and land cover classes.

Random forest SOC model performance
Model performance was strongest for larger scale patterns in SOC. Though predictive performance on test data by the random forest model was low (R² = 0.32), the model covariates were representative of the region (supplementary figure 2), the mean of the residuals was zero, and the largest errors were underestimations in areas otherwise correctly predicted to have higher than typical SOC (supplementary figure 3). We therefore have high confidence in model predictions for the regional scale patterns in SOC, with less confidence for variation at finer spatial scales. The predictions of the random forest in this study were more accurate than those extracted at the same locations from the two global products, SoilGrids250m and the FAO GSOC map (figure 2).
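As an aside on the CART step described in the methods above, a minimal scikit-learn sketch of an interpretable shallow regression tree follows; the study itself used the R package rpart, so this is only an illustration (it reuses the stand-in X and y from the earlier random forest sketch).

```python
# Interpretable shallow regression tree (CART-style); printing the fitted
# tree exposes covariate thresholds of the kind reported in figure 3
# (e.g. MAP splits near 2147 mm and 2833 mm in the real analysis).
from sklearn.tree import DecisionTreeRegressor, export_text

cart = DecisionTreeRegressor(max_depth=3, min_samples_leaf=30)
cart.fit(X, y)  # stand-in data from the previous sketch
print(export_text(cart, feature_names=[f"cov_{i}" for i in range(X.shape[1])]))
```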
Estimates of SOC stocks
Total SOC within the NPCTR of SEAK and BC was estimated at 4.5 Pg C (table 1), with the highest stocks (>500 Mg ha−1) found in the central islands of SE Alaska and westerly locations, and lower stocks (<200 Mg ha−1) predicted for more southerly and easterly locations. Sensitivity analyses indicated that SOC stock estimates were most sensitive to bulk density gap-filling assumptions, as estimates increased by approximately 50% after increasing organic horizon bulk density ca. 3-fold to 0.33 g cm−3 (supplementary table 5). From the fractional increase in SOC caused by the tripling of organic horizon bulk density, we computed that 22% of the predicted NPCTR SOC stocks must be stored in organic soil horizons.

Environmental covariates of SOC stock
CART analysis (figure 3) showed that the lowest SOC stocks, ranging from 128.5 to 194.8 Mg ha−1, were associated with the driest (<2147 mm MAP), southeasterly locations. Intermediate stocks (252.7-442.9 Mg ha−1) were assigned to wetter climates at higher topographic positions (upslope). Very high SOC stocks (336.0-523.3 Mg ha−1) were also associated with wet climate areas (2147-2833 mm) on foot-slope (downslope) landscape positions. Finally, exceptionally high SOC stocks of 446.2-708.6 Mg ha−1 were assigned to the wettest climates (>2833 mm) at relatively low elevations (<189 m).

SOC stocks in the global context
The estimated 4.5 Pg C stored within perhumid and the northern seasonal NPCTR watersheds indicates the region contains approximately 2% of North American SOC within less than 1% of its surface area (Köchy et al 2015). Using a simple upscaling from study region mean stocks (228 ± 111 Mg ha−1) to the global CTR extent (ca. 9.7 × 10⁵ km²; Alaback 1991), we can estimate that 22 ± 11 Pg C may be stored globally in CTR ecosystems. These estimates are likely conservative due to our 1 m depth range, our assumption that the coarse fraction is entirely mineral (Zabowski et al 2011), the abundance of deep (3-5 m) peat-forming fens that can form in wet landscape depressions (supplementary figures 5 and 6) smaller than our spatial resolution (Heusser 1952, D'Amore and Lynn 2002), as well as the likely occurrence of cryptic wetlands hidden within forests (Creed et al 2003). In a review of global SOC, Jackson et al (2017) calculated the first biome-specific SOC stocks, estimating that 64 Pg C is stored in ca. 6 million km² of non-permafrost soils in temperate conifer forests. Our results disaggregate this result further, suggesting CTRs within the temperate conifer forest biome contain one-third of the total SOC while representing less than one-sixth of the biome's area. Jackson et al (2017) also estimated that 22% (14 Pg) of the SOC was stored in organic peatlands, which matched exactly our estimate of the proportion of SOC in organic soil (peatlands and surface organic accumulations). The agreement, while remarkable, is not truly scalable, but it does likely reflect a common suite of C input and stabilization mechanisms in cool, wet temperate conifer forest ecosystems that may lead to consistent partitioning of stocks between mineral and organic soils. As has been demonstrated for aboveground biomass (Keith et al 2009), SOC densities in the CTR appear to rank among the highest globally. The mean SOC stock estimate from this study (228 ± 111 Mg ha−1) positions the NPCTR below estimates for permafrost soils (178-691 Mg ha−1; 5th-95th percentile), but at substantially higher SOC densities than grasslands (56-289 Mg ha−1), evergreen broadleaf forests (83-223 Mg ha−1), and croplands (60-200 Mg ha−1), and within a similar range as permanent wetlands (114-474 Mg ha−1; Sanderman et al 2018b). Our results also suggest SOC densities to 1 m in temperate rainforests are higher than in tropical rainforests (85-271 Mg ha−1), perhaps in part due to litter accumulations, which are typically absent from tropical rainforest floors due to very favorable conditions for decomposition (Parton et al 2007).
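As a back-of-envelope check of the upscaling above, and of the topographic area adjustment described in the methods, the following sketch reproduces the arithmetic with the numbers quoted in the text; the slope-correction function is an interpretation of "actual land surface area calculated from cell slope values", not the authors' exact implementation.

```python
# Upscaling check: 228 Mg C/ha over ~9.7e5 km2 of global CTR.
import math

mean_stock_mg_ha = 228      # +/- 111 Mg C per hectare
ctr_area_km2 = 9.7e5        # global coastal temperate rainforest extent
HA_PER_KM2 = 100
MG_PER_PG = 1e9             # 1 Pg = 1e15 g = 1e9 Mg

total_pg = mean_stock_mg_ha * ctr_area_km2 * HA_PER_KM2 / MG_PER_PG
print(f"{total_pg:.1f} Pg C")   # ~22.1 Pg, matching the 22 +/- 11 Pg figure

def true_surface_area(planimetric_m2: float, slope_deg: float) -> float:
    """Actual land surface area of a raster cell given its slope,
    as used to adjust the final mapped SOC stocks for topography."""
    return planimetric_m2 / math.cos(math.radians(slope_deg))
```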
Model comparisons
Estimated SOC stocks agreed with some past estimates of SOC storage in Alaskan coastal rainforests (table 1). The two regional/national studies that approximate SEAK (Tongass National Forest; Leighty et al 2006) and BC (Tarnocai and Lacelle 1996), when summed, produced an estimate of 5.3 Pg C, compared to our estimate of 4.5 Pg C. However, Tarnocai and Lacelle (1996) integrated SOC to the full observed depth of organic soils, which was approximately 1.51 m, or ∼50% greater than the reference depth in this study (1 m), which may explain the larger estimate. The GSOC map estimated lower SOC stocks (2.5 Pg C) because it only considers the top 30 cm of soil, but if it is assumed that ca. 50% of the SOC stock is stored from 30 to 100 cm (James et al 2014), then the estimates (∼5 Pg C) align well with the present study. For one study and two global SOC products we identified large discrepancies with our SOC predictions. The global model SoilGrids250m and the regional Alaska database produced by Michaelson et al (2013) were outliers in our comparison, predicting 4-fold higher total SOC for the region and 2- to 3-fold higher mean SOC, respectively (table 1). Our model also more accurately predicted the spatial variation in SOC across the NPCTR relative to the FAO GSOC map and SoilGrids250m (figure 2). Both global products showed strong bias for the region, with overestimates where we predict lower stocks and weak correlation with the observed variability overall. Finally, unrealistic spatial discontinuities are present in the FAO GSOC map at the US-Canada border that did not exist in our transboundary assessment (figure 5). Global SOC maps created from a mosaic of national inventories clearly benefit where nations conduct quality SOC assessments, as evidenced by the reasonable summed stock estimates of GSOC for the NPCTR (table 1). However, we propose that biome-specific assessments are better than national inventories because spatial discontinuities that form within global mosaics will fall along ecologically significant, rather than arbitrary political, boundaries (Ramcharan et al 2017). The bulk density gap-filling procedure applied by Michaelson et al (2013) was not replicated in this study because of concerns about the pedotransfer functions involved (Kranabetter and Banner 2000). Direct comparison of our database with that of Michaelson et al (2013) illustrates how gap-filling procedures can lead to very different SOC estimates. Similar issues may underlie discrepancies observed with SoilGrids250m (18 Pg C; table 1). Models built using the Harmonized World Soil Database and SoilGrids250m have been gap-filled using pedotransfer functions that may overestimate organic soil bulk density and lead to overestimated SOC stocks (Köchy et al 2015). A recent global model comparison found much larger SOC stock estimates using the SoilGrids database than from other global databases (Tifafi et al 2018). Our study similarly suggests SoilGrids250m overestimates SOC stocks within the NPCTR, and possibly in other organic and/or high-latitude soils (figure 2(c)). However, we cannot explain the differences with organic soil bulk density alone, based upon our sensitivity analysis in which bulk density was tripled. We therefore propose that the juxtaposition of highly contrasting soils in the NPCTR (supplementary figures 5 and 6) may make the region particularly susceptible to SOC errors when aggregating and modeling pedon observations.
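The depth adjustment invoked for the GSOC comparison is simple enough to verify directly; the sketch below assumes, as the text does, that roughly half of the 0-100 cm stock sits below 30 cm.

```python
# Depth-extrapolation check for the 0-30 cm FAO GSOC total.
gsoc_0_30_pg = 2.5        # Pg C over the study region (table 1)
frac_top_30cm = 0.5       # assumed share of the 1 m stock held in 0-30 cm
print(gsoc_0_30_pg / frac_top_30cm)  # -> 5.0 Pg C, close to the 4.5 Pg estimate
```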
In the NPCTR, C-rich litter layers and organic soils lie adjacent (vertically and laterally) to mineral soils (Michaelson et al 2013, Shaw et al 2018) and, without separate representation during gap-filling or modeling steps, values may be artificially inflated. Populating the database and calculating SOC on the basis of pedological information, including distinguishing surface organic from subsurface mineral horizons, may have improved SOC variable estimation in this study.

Covariates of NPCTR SOC stocks
Digital soil mapping assumes properties such as SOC can be predicted spatially across landscapes from the distribution of geospatial covariates related to the classical factors of soil formation (Jenny 1994, Minasny et al 2013). We found that high precipitation is the primary control on SOC storage in the NPCTR; SOC stocks tracked regional gradients in MAP and longitude, with the highest stocks on the north coast of BC and in central SEAK (figures 3 and 4). Topographic attributes including elevation, wetness, and slope position, which modulate temperature and soil moisture conditions, also emerged as important controls. Land cover was not a strong predictor; however, both the region and the pedon database are dominated by conifer forest, a class that does not distinguish between upland soils and forested wetland coverage. Though lithology has been shown to be important across Alaska generally as a predictor of SOC stocks (Mishra and Riley 2012), we did not find support for lithology as a broadly important covariate. However, lithology is an imperfect indicator of parent material for soil formation across the region due to the extensive presence of glacial deposits.

Vulnerabilities of NPCTR SOC stocks
Stocks of SOC in the NPCTR may be sensitive to several climate-related changes in the coming century, but the overall direction of effects is uncertain. Wolken et al (2011) highlighted loss of winter snow and ice as the most important biophysical change in the CTRs of Alaska, driven by projected average temperature increases of 3.5 °C ± 1.5 °C by 2100. Based upon the primacy of MAP and topographic wetness in our analyses, both higher predicted MAP and a reduction in the proportion falling as snow (Shanley et al 2015) may expand the spatial and temporal domain for high SOC accumulations in the NPCTR. However, this may be balanced by increases in lateral exports of terrestrial DOC, which is already a distinctively large component of NPCTR ecosystem C budgets (Oliver et al 2017). Similarly, effects of temperature may be bi-directional. Elevated temperatures lead to rapid decomposition of NPCTR soil organic matter under laboratory conditions (Fellman et al 2017); however, it is unclear to what degree this effect will be limited by the saturated soil conditions in situ, which constrain decomposition rates (Freeman et al 2001), or offset by concurrent increases in SOC inputs via enhanced primary productivity and litterfall (Buma et al 2016, Genet et al 2018).

SOC modeling considerations
Our study shows that digital soil mapping can be valuable across the NPCTR, where soil survey and conventional soil mapping are challenging (Carpenter et al 2014); however, a baseline of high quality pedon data is still essential for accurate predictions.
Vitharana et al (2017) found that existing data for SE Alaska represented environmental variability well, and our covariate data distributions (supplementary figure 2) agree with this conclusion; however, much of the central and northern BC coastline is less well sampled (figure 1) and those data we did obtain required extensive gap-filling. Our model also under-predicted the largest SOC stocks, which, because random forests subset data to grow each tree, may be due to relatively few observations of very high SOC. Model improvements may also be possible if input covariate datasets and the final map are obtained at finer spatial resolution. For example, the NPCTR displays complex topography that may not be fully resolved at 90 m. Mishra and Riley (2015) found that soil wetness (derived from landform) and aspect were lost as significant predictors of Alaskan C stocks when moving from a 50 to a 100 m resolution. Similarly, Siewert (2017) compared a wide range of resolutions (2-1000 m) for random forest predictions of sub-Arctic peatland SOC stocks in Sweden and found that resolutions >30 m led to underestimates. Building models using more accurately georeferenced pedon data and more finely resolved (<50 m) covariate surfaces may improve spatial predictions of SOC.

[Figure caption fragment: the transboundary map shows smooth landscape-to-regional gradients, owing to harmonized data compilation, gap-filling, and modeling approaches across the transboundary extent; note the FAO GSOC map stock estimates are lower due to a shallower depth range (0-30 cm).]

Conclusions
Regional SOC stock assessments can validate and improve global maps by considering drivers, and compiling datasets, in greater detail. We compiled a SOC database for the NPCTR, using pedology data to guide gap-filling and predictive modeling. Regression tree models predicted high SOC stocks in wet coastal watersheds, indicating that the CTR represents a moisture-dependent hotspot for SOC at mid-latitudes.

Contributions
BB, DB, and AB conceived of the project. PS, CB, SS, and DD provided data for, and supervised GM during, the compilation and gap-filling of the transboundary NPCTR SOC database. BB supervised GM on random forest and CART analyses. IG, SA, and PS created NPCTR digital surface products and provided unpublished data from digital soil mapping efforts by the Hakai Institute in 2014 and 2016. CB and SS provided BEC and BCSIS datasets and CB supervised GM in gap-filling of BEC pedon data. GM and BB authored the first draft of the manuscript and CB, DD, PS, IG, AB, and DB contributed to later drafts.
Design of half-rotating impeller tidal turbine and its energy-capturing characteristics

Based on the characteristics of the half-rotating mechanism, a new type of vertical-axis turbine with a lift-drag combination characteristic was proposed, named the half-rotating impeller tidal turbine (HRITT). The turbine is composed of a rotary mechanism, an alignment mechanism, and a support mechanism. The mechanical design and manufacture of the HRITT were completed, and underwater operation experiments were carried out to verify the rationality of the mechanism design and its good self-starting performance. Through hydrodynamic simulation of the impeller in Xflow, the changes of pressure and velocity around the blades in the flow field were analyzed, demonstrating the feasibility of operation and the lift-drag combined energy-capturing characteristic of the HRITT.

Introduction
Water power is a renewable and clean energy resource with huge reserves, whose development and utilization have attracted increasing attention [1]. Turbines are power machines that convert the kinetic energy of flowing water into mechanical energy; they are divided into two categories: horizontal-axis turbines and vertical-axis turbines [2]. In Seong Hwang investigated a new cycloidal water turbine and carried out a parametric study by CFD analysis to find the optimal parameters [3]; Yang B designed the 'Hunt turbine' and studied it through CFD methods [4]; Fernandes A C studied efficiency improvements for a flat vertical-axis turbine [5]; Ye Li studied three-dimensional effects and arm effects of a vertical-axis tidal current turbine using a newly developed vortex method [6]; Le T Q analyzed the start-up process of vertical-axis turbines with straight and helical blades under different loads using the Fluent software [7]. The bionic machinery team of Anhui University of Technology proposed a special mechanism, named the half-rotating mechanism [8], which has been applied in the fields of ship propulsion and flapping-wing aircraft [9][10]. A special flow field between the half-rotating mechanism and the fluid was found, which has good development prospects. A new type of half-rotating impeller tidal turbine is presented in this paper. The working principle of the HRITT is described, and the mechanism design and prototype manufacturing were completed. Underwater operation experiments were made to verify the rationality of the mechanism design and the running stability of the turbine. A CFD method was used to study the flow velocity and pressure variation around the blades and to explore the energy-capturing characteristics of the HRITT.

Working principle of the HRITT
The working principle of the HRITT is shown in figure 1. The rotating arm rotates freely around a fixed point O, and two blades are articulated at both ends of the rotating arm, which can rotate around hinge points A and B, respectively. The rotating arm and blades are connected by a transmission mechanism. When the rotating arm rotates at angular velocity ω, the two blades rotate in the same direction around points A and B, respectively, at angular velocity ω/2. The extension lines of the blades at any position always intersect at a fixed point P.

Figure 1. Working principle of the HRITT. Figure 2. Motion analysis of a single blade.

Motion analysis of the HRITT with a single blade during one period is shown in figure 2, where V0 and U0 denote the inflow velocity and the blade's linear velocity, respectively, and U1 denotes the blade velocity relative to the inflow.
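To make the velocity triangle above concrete, here is a minimal kinematic sketch in Python. It is an illustration under simplifying assumptions (2-D motion, the blade treated as a flat chord at the arm tip, zero initial chord phase); the radius and angular velocity values are placeholders, not parameters from the paper.

```python
# Velocity triangle of the half-rotating blade: the arm turns at omega,
# the blade chord at omega/2, and the attack angle alpha is the angle
# between the chord and the relative velocity U1 = V0 - U_blade.
import numpy as np

R = 0.2                       # arm radius (m), placeholder value
omega = 2.0                   # arm angular velocity (rad/s), placeholder
V0 = np.array([-1.0, 0.0])    # inflow along -X at 1 m/s

def attack_angle_deg(theta: float) -> float:
    # Blade-tip velocity is tangent to the arm tip's circular path.
    u_blade = omega * R * np.array([-np.sin(theta), np.cos(theta)])
    u_rel = V0 - u_blade                       # U1, the flow seen by the blade
    chord = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
    cos_a = np.dot(u_rel, chord) / np.linalg.norm(u_rel)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

for deg in range(0, 360, 60):
    print(deg, round(attack_angle_deg(np.radians(deg)), 1))
```

Because of the ω/2 coupling, the chord direction advances only half as fast as the arm, so the attack angle, and hence the dominant force (drag versus lift), varies systematically around the cycle, as described next.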
During operation of the device, the angle α (the attack angle) between the blade and the relative velocity is larger when the blade moves through the downtidal region on one side of the Y axis. There the drag force acting on the blade by the incoming flow is much greater than the lift force, so drag is the main driving force F. On the contrary, when the blade moves through the countertidal region on the other side of the Y axis, the lift force is greater than the drag force, and lift becomes the main driving force F. Therefore, the HRITT is a kind of vertical-shaft turbine with a combined lift-drag characteristic.

Mechanism design of the HRITT
The mechanism of the HRITT was designed according to the working principle. The half-rotating motion between the blades and the arm was realized using a synchronous belt drive with a transmission ratio of 2:1. An alignment device was designed to realize the alignment function of the HRITT. The preliminary design of the HRITT structure consisted of three parts: a rotary mechanism, an alignment mechanism, and a support mechanism; the detailed structure is shown in figure 3.

Design of rotary mechanism
The rotary mechanism is the core structure of the HRITT, whose function is to capture and convert energy. The rotating mechanism mainly consists of the upper rotating arm, lower rotating arm, impeller, fixed column, and primary transmission system. The upper and lower rotating arms are connected by the fixed column to form a rotating frame in the shape of the character 'Gong', which is connected with the supporting mechanism by the alignment arm. Two blades are symmetrically mounted at both ends of the rotating arm and connected by the primary transmission system. The installation phase difference between the two blades is 90 degrees. The primary transmission system consists of large and small synchronous pulleys, tensioners, and a synchronous belt. The large synchronous pulley is fixed to the blade and the small synchronous pulley is fixed to the upper alignment arm. The tooth number ratio of the large to small synchronous pulley is 2:1. The small synchronous pulley stays fixed and the rotating arms rotate around it, while the large synchronous pulleys rotate with the blades. The primary output shaft, which is fixed on the upper rotating arm, transmits energy to the secondary output shaft through the transmission system.

Design of alignment mechanism
The main function of the alignment mechanism is to adjust the rotary mechanism when the direction of the incoming flow changes. It is mainly composed of the upper alignment arm, lower alignment arm, and secondary transmission mechanism. The upper alignment arm carries the fixed wheel of the primary transmission system (the small synchronous pulley) and the secondary transmission system. The secondary transmission mechanism connects the primary output shaft and the secondary output shaft by synchronous belt transmission, which ensures that power can be transmitted continually to the secondary output shaft. When the direction of the incoming flow changes, the force acting on the alignment arm produces a horizontal component perpendicular to the alignment arm, so automatic alignment is realized.

Design of supporting mechanism
The supporting mechanism is mainly composed of a body and fixed bases.
The body is the main frame of the supporting mechanism, used to support, install, and fix the whole HRITT. The rotary mechanism and alignment mechanism are installed on the support mechanism through two collinear fixed bases connected through bearings, which realizes the relative rotation between the rotary mechanism and the supporting mechanism.

Prototype manufacture and underwater experiment of the HRITT
According to the design scheme above, the HRITT prototype was manufactured, as shown in figure 4. Aluminum alloy and stainless steel were selected as the materials to prevent rust from damaging the prototype. The main structural parameters of the HRITT are given in Table 1. [Table 1 fragment: 0.2 × 0.6 × 0.003 (m; presumably the blade plate dimensions); length of alignment arm, 0.2 m.] A simplified experimental scheme was adopted for the underwater experiments: the prototype was driven forward by a boat to simulate the action of incoming flow on it. Supporting float balls were fixed on both sides of the prototype, and a power-generating device with a light bulb was installed on the secondary output shaft of the HRITT to facilitate observation of the operation and energy output, as shown in figure 5.

Figure 4. Prototype of the HRITT. Figure 5. Underwater experiments of the HRITT.

Through the underwater experiments, it was observed that the prototype could start up and run smoothly at an inflow speed of about 0.4 m/s, with the light bulb lit, indicating that the HRITT could run and output energy continuously. This experiment verified the rationality of the HRITT design and showed that the prototype has good self-starting performance.

Analysis of energy-capturing characteristic of the HRITT
The underwater operation experiment validated the rationality of the design scheme. In order to further explore the energy-capturing characteristic of the HRITT, a simplified three-dimensional model of the HRITT was established in UG and imported into Xflow, where it was simulated to analyze the flow field changes and energy-capturing characteristics near the blades. The flow field measured 8 m × 4 m × 4 m, as shown in figure 6. In the initial state, the blade's longer side was parallel to the Y axis and the shorter one was parallel to the Z axis in the flow field. The phase difference between blade 2 and blade 1 was 90 degrees. The inflow direction was along the negative direction of the X axis and the velocity was 1 m/s. Under the action of the incoming flow, the half-rotating impeller rotated counterclockwise around the Y-axis direction and the blades were constrained to the half-rotating motion by a prescribed motion equation. After setting the relevant parameters, the calculation was started and the simulation results were processed through the post-processing module. In one motion cycle, the blade motion law in the first half-cycle is identical to that in the second half-cycle. Therefore, the energy-capturing characteristics of the HRITT were analyzed using the velocity vectors and pressure nephograms of the fluid around the blade in the first half-cycle, as shown in Table 2. As table 2 shows, the blade was in a flow-separation state, and the drag on the blade generated by the pressure difference was large in the range of −80° to 160°. Positive work was done on the blade in the ranges of −80° to 40° and 100° to 160°, and negative work was done in the range of 40° to 100°.
Eddies continuously attached to the blade's leading edge, inducing additional lift that did positive work on the impeller in the range of 160° to 200°. In the range of 200° to 280°, the flow stayed attached to the blade and the velocity over the inner face was higher, generating lift that did positive work. Therefore, the blades behaved as a lift-type device in the range of 160° to 280° and as a drag-type device in the ranges of −80° to 40° and 100° to 160°. For the double-bladed half-rotating impeller, when one blade is in the low-efficiency zone, the other blade is in the high-efficiency zone, which ensures that the half-rotating impeller can run continuously. Therefore, the HRITT is a kind of vertical tidal turbine with a unique lift-drag combined energy-capturing characteristic.

Conclusion
A new kind of half-rotating impeller tidal turbine was designed and a physical prototype was manufactured. Good self-starting performance and smooth operation were verified by underwater experiment. Good hydrodynamic performance of the HRITT was shown by numerical simulation, which found that the HRITT has the characteristic of lift-drag combination.
An RNA aptamer that interferes with the DNA binding of the HSF transcription activator

Heat shock factor (HSF) is a conserved and highly potent transcription activator. It is involved in a wide variety of important biological processes including the stress response and specific steps in normal development. Reagents that interfere with HSF function would be useful for both basic studies and practical applications. We selected an RNA aptamer that binds to HSF with high specificity. Deletion analysis defined the minimal binding motif of this aptamer to be two stems and one stem-loop joined by a three-way junction. This RNA aptamer interferes with the normal interaction of HSF with its DNA element, which is a key regulatory step for HSF function. The DNA-binding domain plus a flanking linker region on the HSF (DL) is essential for the RNA binding. Additionally, this aptamer inhibits HSF-induced transcription in vitro in the complex milieu of a whole cell extract. In contrast to the previously characterized NF-κB aptamer, the HSF aptamer does not simply mimic DNA binding, but rather binds to HSF in a manner distinct from DNA binding to HSF.

INTRODUCTION
Heat shock factor (HSF) is a potent transcription activator that is highly conserved from yeast to humans. HSF plays a central role in activating gene expression in response to environmental stresses including heat shock, and regulates a wide range of downstream target genes in the genome (1). A genome-wide study showed that ∼3% of Saccharomyces cerevisiae genes are functional targets of HSF. Many are involved in a wide variety of important cellular functions such as signal transduction, energy generation, vesicular transport and chaperone function (2). HSF function is essential for the stress response, for viability in yeast (3) and for early development in Drosophila (4). HSF is also involved in the aging process in Caenorhabditis elegans (5), as well as in extra-embryonic development in mammals (6). In addition, downregulating HSF activity sensitizes cancer cells to some anti-cancer drugs (7). HSF, which functions during heat shock as a homo-trimer, has a highly conserved DNA-binding domain and trimerization domain, and a less conserved activation domain. Trimerized HSF binds tightly to a conserved heat shock element (HSE) that is composed of the basic unit 'AGAAn' arranged as inverted repeats; e.g. a 15 bp sequence containing three such units, called HSE3 (AGAAGCTTCTAGAAG), is a good binding target for an HSF trimer (8). In between the DNA-binding domain and the trimerization domain, there is a flexible linker region that is essential for positioning the DNA-binding domain in an HSF homotrimer (9). Upon heat shock or other stresses, the trimerization domain, which contains leucine zipper repeats, becomes available for multimerization, and the resulting HSF trimers bind tightly to HSEs of heat shock genes (1). HSF activates transcription by further recruitment of other important transcription factors or complexes, such as the mediator complex, to the heat shock promoters (10). A major goal of our laboratory is to identify specific reagents that can interfere with particular macromolecular interactions in order to dissect transcriptional mechanisms in vitro and in vivo (11,12). Heat shock genes provide an attractive model system for these studies. Because the HSF/DNA interaction is a key regulatory step in heat shock gene activation, generating reagents that can specifically disrupt this interaction is critical.
RNA aptamers are reagents that can be selected from a random RNA sequence pool for their ability to bind tightly to a protein target. Once isolated, such aptamers can be used to interfere with specific macromolecular interactions for evaluating mechanistic questions, either by simply adding the aptamers to in vitro transcription systems or by expressing aptamer-encoding genes at high levels in cells and organisms (11,13). Only a few RNA aptamers have been selected against transcription factors that recognize specific DNA sequences. The best-characterized example is an NF-κB aptamer. This RNA aptamer has a structure that mimics the structure of the normal DNA element binding to NF-κB when the aptamer is bound to the protein (14). This example raises the possibility that transcription factors might have a common nucleic acid-binding surface for both endogenous and selected nucleic acid molecules (14). We characterized an HSF aptamer and show here that it can interfere with the normal interaction of HSF and DNA. However, this aptamer binds to HSF in a manner mechanistically distinct from that of DNA binding to HSF, demonstrating that such selected RNA aptamers can bind transcription factors by mechanisms that do not simply mimic the DNA element. The elaborate structural features of this HSF aptamer, namely a three-way junction structure, might account for some of its surprising properties. Furthermore, the ability to mechanistically inhibit HSF function also makes this aptamer a molecular tool with potential significance in clinical applications where diseases are influenced by HSF activity.

Proteins and SELEX
Baculovirus-expressed dHSF was purified as described elsewhere (15). MBP-fused dHSF and His-tagged full-length yHSF were expressed in Escherichia coli and purified by conventional affinity column chromatography. Partial yHSF proteins and point-mutated yHSFs were expressed and purified using previously described protocols (9). The linker peptide (underlined) with extra residues for dimerization (WQFENENFIRGREDLLEKIIRQKGSSNACLIN) was synthesized on a continuous flow PerSeptive Biosystems (Framingham, MA) peptide synthesizer and purified to homogeneity by reversed-phase C18 high-performance liquid chromatography. The selection of the RA1-HSF aptamer was performed using MBP-fused dHSF and the SELEX method based on nitrocellulose filter partitioning, with final selection by electrophoretic mobility shift assay (EMSA) (11). We performed 14 cycles of selection and obtained 5 identical sequences (named 'RA1-HSF') from a total of 20 sequences cloned from the final stage pool. The remaining 15 sequences showed no detectable HSF-binding activity.

EMSA
The general scheme of EMSA was adopted and modified from previous work (16). RNA probes were internally labeled with [α-³²P]UTP using a T7 in vitro transcription kit (MAXIscript Kit; Ambion, Austin, TX). DNA was end-labeled with [γ-³²P]ATP with T4 polynucleotide kinase. The 10 μl binding solution contained 1× binding buffer (10 mM Tris, 40 mM KOAc and 1 mM MgCl₂, pH 7.6), 1 μg carrier yeast RNA, 4 μg carrier BSA, 5 mM DTT, 10% glycerol, 6 U of SUPERase-In (Ambion), plus protein and labeled RNA. The concentration of the labeled RNA probe was below 1 nM in most experiments to ensure an excess protein concentration. Protein and RNA were incubated at room temperature for 30 min, and then for 10 min at 4 °C before loading on a 6 or 9% native polyacrylamide gel or a 2% agarose gel.
The polyacrylamide gels contained 1/4× TBE buffer and 1 mM MgCl₂, and the agarose gels contained 1× TAE buffer. Gels were run at 100-150 V at 4 °C for 1-2 h. They were then dried and exposed over a phosphorimager plate, and scanned after 4 h to overnight exposure using a STORM image scanner.

Competition assay
Competition assays were performed in the same binding solution as described for EMSA. For DNA and RNA competition assays, RNA aptamer probes were labeled with [α-³²P]UTP as described above. The HSE3 DNAs and the HSE3 RNA were end-labeled with [γ-³²P]ATP with T4 polynucleotide kinase. An excess of a particular cold (unlabeled) DNA or RNA was co-incubated with the labeled RNA or DNA and HSF protein for 30 min at room temperature, and examined by EMSA. In the protein-protein competition assay (Figure 4B), the RA1-HSF aptamer was labeled as above, and different amounts of each protein construct were incubated together with the RNA for 30 min at room temperature to allow competition for binding to the RNA. Samples were subjected to gel electrophoresis and exposed as described above for EMSA.

Double-strand annealing experiment
Annealing of the two RNA strands was performed by incubating the labeled RNA strand A and unlabeled RNA strand B at 70 °C in 1× binding buffer for 10 min, after which the temperature was reduced gradually to room temperature. Both the annealed RNA mixture and the labeled single strand of RNA alone were incubated with 40 nM dHSF at room temperature for 30 min before loading onto an agarose gel for the EMS assay.

In vitro transcription assay
Yeast strain BJ1991 (prb1 pep4 gal2 leu2 trp1 ura3) was grown in yeast extract/peptone/dextrose (YEPD) to an OD₆₀₀ of 2.0. Cells were harvested, and whole cell extracts were prepared using a mortar and pestle as described previously (17). Protein concentration was determined by Bradford assay. In vitro transcription was performed based on a protocol adapted from Ref. (12). Briefly, transcription reactions were carried out at room temperature in a 25 μl final volume using a plasmid template pJJ461 (200 ng) that contains an upstream HSE (CTTCTAGAAGCTTCTAGAAG) and the yeast CYC1 promoter fused to a 290 nt G-less cassette. Yeast whole cell extract (120 μg) was incubated for 2 min in transcription buffer [20 mM HEPES, pH 7.6, 100 mM potassium glutamate, 10 mM MgOAc, 5 mM EGTA, 2.5 mM DTT, 10 μM ZnSO₄, 10% glycerol, 20 U of RNase inhibitor (SUPERase-In; Ambion), plus an ATP regeneration system (3 mM ATP, 30 mM creatine phosphate and 150 ng of creatine kinase)]. Aptamers and recombinant proteins were added to the extract mixture at the concentrations indicated, together with the DNA template. Transcription was initiated with NTPs (10 μCi of [α-³²P]UTP, 50 μM UTP, 250 μM CTP and ATP, final concentrations) and terminated with stop solution (10 mM Tris, 20 mM EDTA, 0.2 M NaCl, 1 mg of glycogen and 25 U of RNase T1, pH 7.6). The samples were incubated at 37 °C for 30 min, digested with proteinase K in the presence of SDS (2%) for 20 min, and then phenol/chloroform extracted and ethanol precipitated. RNA products were separated on a 6% polyacrylamide sequencing gel.

RESULTS
Defining the critical sequences of the RA1-HSF aptamer required for HSF binding
An RNA aptamer against HSF was selected from a pool of 10¹⁴ RNA molecules that can bind to bacterially expressed dHSF.
The mfold program (18) predicted that the most stable RNA secondary structure is composed of a three-way junction radiating three different stem-loops, which we defined as stem-loops 1, 2 and 3, as shown in Figure 1A. The predicted stem-loop 1 is essential for the aptamer function and could not be shortened (data not shown). We defined the minimal functional motif by trimming both stem-loops 2 and 3 in a distal-to-central manner and testing the resulting RNAs for HSF-binding activity by EMSA. A 45 nt sequence was finally defined as the minimal structure that still carries detectable binding activity: removing 1 bp more from either stem-loop 2 or 3 resulted in a sharp decrease of binding activity (Figure 1B and C). This minimal aptamer structure, which we refer to as the CORE (Figure 1D), is relatively large and complex compared to most other minimal functional motifs of other identified RNA aptamers, indicating that its interaction with HSF is likely to be extensive. The apparent binding Kd for the full-length aptamer binding to the full-length dHSF is 20-40 nM, and 40-80 nM for the CORE aptamer. However, we cannot rule out that certain bases in the middle of the CORE sequence may be deleted or replaced without compromising aptamer-binding activity.

Confirming the three-way junction structure of the aptamer by a double-strand annealing experiment
To test whether the three-way junction structure that was predicted by mfold is the active conformation, we sought to assemble this structure by an independent method, where extra base pairing on stem-loops 2 and 3 ensures the formation of the three-way junction. We designed a double-stranded RNA annealing experiment where the CORE aptamer sequence was divided between two complementary RNA molecules (Figure 2A and B). We tested whether annealing of these two RNA molecules could reconstruct the aptamer-binding activity. If the real secondary structure were different from the predicted structure, this reconstruction of activity would very likely fail. The additional base pairs lock in two of the predicted stems and minimize the potential for additional structures involving bases in the third stem-loop. The annealing of these two RNAs produced strong HSF-binding activity as tested by EMSA, whereas the individual RNAs alone had no activity (Figure 2C). These results provided an independent test of the three-way junction nature of the aptamer structure. Furthermore, this experiment suggested another level for modulating the activity of an aptamer by 'heterodimer design'. The fact that the function of an aptamer depends on the presence of two separate RNA molecules provides a strategy for tightly controlling its activity.

RA1-HSF binds specifically to the HSF protein
Because numerous nucleic acid-binding proteins exist in a cell, it is important to show that the HSF aptamer binds with specificity to its proposed target, HSF. First, we examined the specificity of this aptamer by testing its interaction with several other transcription factors (TBP, GAGA factor, Gal4-VP16) that bind to DNA. None of them showed any binding activity to the aptamer, even at a protein concentration of 250 nM (Figure 3A). Second, the specificity of the aptamer/HSF interaction was tested in the background of whole insect-cell lysate proteins. Insect culture cells (SF9 cells) that do or do not express Drosophila HSF protein were lysed, followed quickly by EMSA using ³²P-labeled RA1-HSF.
The aptamer RNA could form a single RNA-protein complex band only with the cell lysate that expresses dHSF, which indicated that this aptamer specifically recognizes dHSF, at least in the background of whole insect cell lysates (Figure 3B). Interestingly, a double-stranded region in stem-loop 3 of the aptamer has the sequence 'AGAAU', which corresponds to the 5 bp repeating unit of the HSE DNA sequence (Figure 1A). However, a dsRNA containing the sequence resembling HSE3 dsDNA failed to compete with the labeled aptamer for binding to HSF (Figure 3C). This result ruled out the possibility that this aptamer binds to HSF simply through the part of the double-stranded RNA that carries the corresponding sequence of HSE DNA, though this is not unexpected given the difference in helix structure between DNA and RNA (19). This aptamer binds both bacterially expressed dHSF and insect-expressed dHSF (Baculovirus expression system) with almost the same affinity. This implies that the binding is not influenced significantly by post-translational modification of HSF (data not shown). Interestingly, this aptamer also binds to the yeast HSF1 protein with affinity similar to that for Drosophila HSF. Therefore, we used a yHSF deletion series to define the minimal region on HSF for aptamer binding in the following experiments.

The DNA-binding domain plus its flanking linker region (DL) of the HSF protein is essential for binding aptamer RNA
Given that the aptamer RNA and HSE3 DNA are competitive in their binding to HSF, we anticipated that the RNA was likely to bind to the DNA-binding surface of the HSF protein, perhaps by structurally simulating HSE DNA. However, we observed that the interactions of HSF with the RNA aptamer and with HSE3 DNA show important differences. A previous study has shown that the DNA-binding domain alone is sufficient for HSE3 DNA binding (9). Surprisingly, the DNA-binding domain alone is not sufficient for the RNA binding, even at a protein concentration as high as 5 μM (data not shown). In order to define the region required for RNA binding, we used a deletion series of the yHSF protein, starting with a peptide that contained the DNA-binding domain, the conserved 21 amino acid linker, the non-conserved 52 amino acid linker and the trimerization domain. This construct, which we refer to as DLT, binds the aptamer with approximately the same affinity as the full-length protein. The non-conserved 52 amino acid linker was not required for RNA binding (compare lanes A and B in Figure 4B), and this part of Hsf1 is known not to be essential for structural integrity or in vivo function (9). However, the 21 amino acid conserved linker was absolutely required for RNA binding (compare lanes B, C, D and E in Figure 4B). Surprisingly, the trimerization domain was not required, as long as the peptide containing the DNA-binding domain and linker was dimerized through an added cysteine near the C-terminus (lane F in Figure 4B). Because EMSA could fail to detect weak interactions, we also performed a competition experiment, using the different HSF domains to compete with the RNA binding to the DLT construct. The competition results show that the DL, but not the DNA-binding domain alone or the monomeric or dimeric linker peptides (Lm, Ld), competes with the DLT for binding aptamer RNA (Figure 4D). This indicates that the DL peptide contains the minimal set of domains required for the aptamer binding.
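As a quantitative aside, the reported apparent Kd values make it easy to estimate how much of the labeled RNA is bound under the excess-protein EMSA conditions used above. The sketch below assumes a simple one-site Langmuir binding model, a standard approximation rather than a fit taken from the paper.

```python
# Fraction of labeled RNA bound when protein is in excess (probe << Kd):
# f = [P] / ([P] + Kd). Kd values follow the ranges reported in the text
# (full-length aptamer 20-40 nM; CORE aptamer 40-80 nM).
def fraction_bound(protein_nM: float, kd_nM: float) -> float:
    return protein_nM / (protein_nM + kd_nM)

for kd in (20, 40, 80):
    print(f"Kd = {kd:>2} nM: bound at 40 nM HSF = {fraction_bound(40, kd):.2f}")
```

At 40 nM dHSF (the concentration used in the annealing EMSA), this predicts roughly one-third to two-thirds of the probe shifted, consistent with a readily detectable complex band.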
The RNA aptamer binds to HSF in a manner distinct from HSE DNA binding to HSF
Though the DNA-binding domain alone is sufficient for HSE3 DNA binding, some mutations within the conserved protein linker region can dramatically decrease the DNA-binding activity, presumably by changing the positional relationships of the DNA-binding domains in the HSF trimer (20). To test whether the linker requirements for HSF binding to RNA are similar to those for HSF binding to DNA, we used six different point-mutated versions of the DLT construct that varied at five conserved residues within the 21 amino acid conserved linker region (Figure 5). EMSA results showed no correlation between DNA binding and RNA binding to the proteins containing these point mutations. Some mutations diminished DNA binding but not RNA binding, whereas others diminished RNA binding but not DNA binding (Figure 5). We conclude that the binding pattern of the RNA aptamer to HSF is distinct from that of DNA binding to HSF. The results also further confirm that the conserved linker region is critical for the HSF interaction with the RNA aptamer.

The HSF RNA aptamer inhibits HS transcription in yeast cell extracts, and this inhibition activity is reversed by the addition of DL
The finding that this aptamer can compete with DNA binding to HSF indicates that the aptamer could downregulate HSF transcriptional activity. We tested the effects of this aptamer on heat shock (HS) genes using a yeast cell extract in vitro transcription system. RA1-HSF RNA was added into the yeast whole cell extract, which contains the necessary components for HS transcription, together with a reporter gene whose promoter carries an HSE3 element (Materials and Methods). This yeast transcription system has been described and applied successfully to determine the inhibitory effects of other aptamers against other transcription factors (12). The results in Figure 6 show that the RA1-HSF RNA aptamer inhibits transcription from the HS promoter at a concentration as low as 10 nM. Moreover, adding purified recombinant DL protein reversed this transcription inhibition (Figure 6). This result not only confirms the inhibitory activity of this aptamer on HS genes, at least in a yeast cell extract transcription system, but also demonstrates that the inhibitory activity of this aptamer acts specifically through the HSF interaction with DNA, since the presence of extra DL could reverse the inhibitory effects completely. In contrast, adding DL alone caused an insignificant change to the overall transcription, which ruled out the possibility that DL reversed the inhibition by stimulating transcription through an independent activation pathway. By using DL instead of full-length yHSF, we avoided the possibility that additional recombinant yHSF might squelch transcription by binding other proteins that interact with other domains of yHSF. Thus, the RNA aptamer appears to inhibit transcription through a specific interaction with the DL domain of HSF and, moreover, these results demonstrate the potential utility of aptamers in the dissection of transcriptional mechanisms.

DISCUSSION
The HSF aptamer we have selected and characterized here has an unusually complicated secondary structure. The predicted secondary structure has three stem-loops connected by a three-way junction. Serial deletions of the aptamer defined a minimized aptamer-binding motif. This secondary structure has been further confirmed by independently assembling a homologous three-way junction using two separate RNAs, whose annealing produced full HSF-binding activity.
Because complicated RNA structures with one or more branches account for only <1% of the secondary structures in a 40mer random sequence pool as used here (21), the functional domain of a selected RNA aptamer is often a single stem-loop structure. Why did we not select simpler HSF aptamers? Perhaps the starting pool does not contain a simple structured RNA that binds tightly to HSF. Also, a more complicated RNA structure may be favored in the selection of an RNA that binds to the complicated and flexible structure of the HSF protein. For example, the linker region of HSF, which is essential for aptamer binding, is a highly flexible unstructured region (9). Most previously characterized RNA aptamers to DNA-binding proteins were found to bind to the DNA-binding surface of the targeted protein (12,22). In a structural study of an NF-κB/aptamer complex, Huang et al. (14) found that the NF-κB RNA aptamer is a DNA mimic, and matches perfectly the DNA-binding surface of NF-κB. In contrast to the NF-κB example, our HSF aptamer provides the first example of an RNA aptamer selected to a DNA-binding factor that can compete with DNA but binds to the protein in a manner that is distinct from DNA binding to the protein. Moreover, even though the linker region of HSF has been proven to be essential for aptamer binding, this does not rule out the possibility that the DNA-binding domain makes a direct contribution to the binding. We have previously generated RNA aptamers as inhibitors of particular macromolecular interactions of the general transcription factor TATA-binding protein (TBP) (12). Here we have generated and characterized an RNA aptamer that is a highly effective inhibitor of a key upstream transcription activating factor. The fact that this RA1-HSF aptamer can inhibit HSF-induced transcription in vitro in the complex milieu of a whole cell extract demonstrates the potential usefulness of this aptamer for both in vitro and in vivo studies of HSF function. To our knowledge, there is no drug that targets this DNA-binding function of HSF. We envision that the information derived from this and future studies with this aptamer will prove useful in the diagnosis and treatment of diseases that are influenced by HSF function.
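The rarity of branched structures in a random pool, noted above, can be checked computationally: a predicted minimum-free-energy structure can be classified as a single stem-loop or a branched (multiloop-containing) fold. The sketch below does this with the ViennaRNA package's Python bindings; the input sequence is a made-up placeholder, not the RA1-HSF aptamer, and the branching test is a simple dot-bracket parse.

# Sketch: fold a candidate RNA and test whether the predicted MFE
# structure is branched (contains a multiloop) rather than a single
# stem-loop. Requires the ViennaRNA Python bindings ("import RNA").
# The sequence below is a hypothetical placeholder.
import RNA

seq = "GGGAGACAAGAAUAACGCUCAAGGCUUAGCGUUAUCUUGUCUCCC"
structure, mfe = RNA.fold(seq)
print(structure, f"({mfe:.1f} kcal/mol)")

def is_branched(dot_bracket):
    """True if two sibling helices open inside the same enclosing
    loop (a multiloop) or in the exterior loop."""
    depth = 0
    children = {0: 0}  # helix openings seen at each nesting depth
    for ch in dot_bracket:
        if ch == "(":
            children[depth] = children.get(depth, 0) + 1
            if children[depth] >= 2:
                return True
            depth += 1
            children[depth] = 0  # fresh context inside the new pair
        elif ch == ")":
            depth -= 1
    return False

print("branched:", is_branched(structure))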
2014-10-01T00:00:00.000Z
2006-08-07T00:00:00.000
{ "year": 2006, "sha1": "27fd4265ee03d4b17ed76443185482eca9175fdc", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/34/13/3755/3974344/gkl470.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1372c9e844e24c4626ebfb103da6bae9668633bf", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
256701907
pes2o/s2orc
v3-fos-license
Planthopper salivary sheath protein LsSP1 contributes to manipulation of rice plant defenses
Salivary elicitors secreted by herbivorous insects can be perceived by host plants to trigger plant immunity. However, how insects secrete other salivary components to subsequently attenuate the elicitor-induced plant immunity remains poorly understood. Here, we study the small brown planthopper (Laodelphax striatellus) salivary sheath protein LsSP1. Using Y2H, BiFC and LUC assays, we show that LsSP1 is secreted into host plants and binds to the salivary sheath via mucin-like protein (LsMLP). Rice plants pre-infested with dsLsSP1-treated L. striatellus are less attractive to L. striatellus nymphs than those pre-infested with dsGFP-treated controls. Transgenic rice plants with LsSP1 overexpression rescue the insect feeding defects caused by a deficiency of LsSP1 secretion, consistent with the potential role of LsSP1 in manipulating plant defenses. Our results illustrate the importance of salivary sheath proteins in mediating the interactions between plants and herbivorous insects.
In nature, plants are continuously challenged by various pathogens, including bacteria, fungi, and nematodes. To survive or fend off attacks, plants have evolved multi-layered immune systems, from recognizing pathogens to activating defense responses. Pattern recognition receptors can perceive "non-self" molecules and activate pattern-triggered immunity (PTI), including mitogen-activated protein kinase (MAPK) cascades, reactive oxygen species (ROS), and hormone signaling [1][2][3]. To counteract plant immunity, plant pathogens deliver secretory effectors to target the immune signaling components of PTI and interfere with their activities 4,5. However, some effectors can be sensed by the plants with time, further initiating effector-triggered immunity 6. Over millions of years of co-evolution, plant pathogens have developed dynamic and complex interactions with host plants. Piercing-sucking insects, such as planthoppers, aphids, and whiteflies, are important pests that damage host plants by feeding or transmitting viruses. During the feeding process, two types of saliva (gel saliva and watery saliva) are ejected into plant tissues 7. These oral secretions, on the one hand, hinder insect performance by activating plant defenses. For example, salivary protein Cathepsin B3 from Myzus persicae can be recognized by Nicotiana tabacum plants, which thus suppress aphid feeding by triggering ROS accumulation 8. Moreover, salivary protein 1 from Nilaparvata lugens induces cell death, H2O2 accumulation, defense-related gene expression, and callose deposition when it is transiently expressed in Nicotiana benthamiana leaves or rice protoplasts 9. On the other hand, saliva plays multiple roles in improving insect performance, such as calcium-binding proteins for calcium regulation 10, DNase II for extracellular DNA degradation 11, and Helicoverpa armigera R-like protein 1 (HARP1) for plant hormonal manipulation 12. Piercing-sucking herbivores eject abundant salivary effectors into plant tissues. Some salivary elicitors may trigger plant defenses, while the elicitor-induced defenses are inhibited by other salivary components. However, little is known about these complex interactions within saliva. Formed from gel saliva, the salivary sheath is indispensable for insect feeding. It is secreted during stylet probing, and provides mechanical stability and lubrication for stylet movement 13.
The salivary sheath is capable of sealing the stylet penetration site, thereby preventing the plant immunity triggered by leaked cell components 7. In aphids and planthoppers, disruption of salivary sheath formation can hinder insect feeding from plant sieve tubes, but not from an artificial diet 14,15. After secretion, the salivary sheath is distributed in the plant apoplast and directly contacts plant cells 15. The salivary sheath is composed of many salivary sheath proteins, which can potentially be recognized as herbivore-associated molecular patterns (HAMPs) that activate the immune response in host plants 16. Because of their front-line roles in herbivore-plant interactions, a few proteins in the salivary sheath may exhibit a high evolutionary rate 17. Nevertheless, current knowledge on the salivary sheath is mainly limited to its mechanical function. Therefore, it is interesting to reveal its other functions in herbivore-plant interactions. The plant apoplast space is an important battleground between the host and pathogens 18. The papain-like cysteine proteases (PLCPs), which share a conserved protease domain, are prominent enzymes in the plant apoplast that can function as central hubs in plant immunity 19. As a well-known maize insect resistance gene, Mir1 belongs to the PLCPs 20. It rapidly accumulates at wound sites and can degrade the insect gut surface to confer maize resistance against caterpillars 20,21. Mir1 accumulation is reported to enhance plant resistance against root-feeding herbivores and corn leaf aphids 22,23. In turn, PLCPs are common targets of pathogen effectors. Fungi, oomycetes, nematodes, and bacteria can actively interfere with the activity or subcellular location of plant PLCPs, thereby suppressing plant immunity [24][25][26][27][28]. The small brown planthopper, Laodelphax striatellus, is a destructive pest that causes severe yield reductions and economic losses in rice crops. Similar to most phloem-feeding insects, planthoppers can secrete a mixture of saliva during feeding. Several salivary proteins have been found to participate in salivary sheath formation and/or interfere with host immune responses 13,29. Nevertheless, the functions of most salivary proteins remain unknown. In this study, the L. striatellus salivary sheath protein LsSP1 is employed as a molecular probe to investigate the mechanism by which this planthopper interacts with salivary sheath mucin-like protein (LsMLP)-triggered, PLCP-mediated plant defenses. The salivary LsSP1 is secreted into host plants during feeding and is shown to interact with multiple PLCPs belonging to different subfamilies in yeast two-hybrid (Y2H) and bimolecular fluorescence complementation (BiFC) assays. OsOryzain is a member of the PLCPs. Expression of LsSP1 in N. benthamiana plants significantly attenuates the H2O2 accumulation and defense gene expression induced by OsOryzain and LsMLP, while in rice plants the role of OsOryzain was not confirmed. Overexpression of LsSP1 in rice plants rescues the feeding defects caused by a deficiency in LsSP1 secretion.
Results
LsSP1 is important for L. striatellus feeding on rice plants
Many of the genes highly expressed in L. striatellus salivary glands were planthopper-specific 30, and their homologous genes were not found in other species (Supplementary Data 1). To reveal their specific roles in planthopper-rice interactions, this study first investigated the expression patterns of these genes in different tissues.
In total, 30 genes were found to be specifically expressed in salivary glands (Supplementary Fig. 1). The L. striatellus salivary protein 1 (hereafter: LsSP1, accession number: ON322955) was among the top 5 most abundant, salivary gland-specific, and planthopper-specific genes, and was therefore selected for further analysis. Insect survivorship was not significantly affected by treating L. striatellus with dsLsSP1 (log-rank test, p = 0.3044; Fig. 1a). However, the dsLsSP1-treated L. striatellus produced fewer offspring (one-way ANOVA test followed by Tukey's multiple comparisons test, p = 0.0153; Fig. 1b) and excreted less honeydew (one-way ANOVA test followed by Tukey's multiple comparisons test, p = 0.0127; Fig. 1c) than the dsGFP-treated control. Electrical penetration graph (EPG) recording was used to monitor insect feeding behavior. Compared with dsGFP treatment, L. striatellus treated with dsLsSP1 exhibited a significant decrease (by 62%; two-tailed unpaired Student's t test, p = 0.0057) in phloem sap ingestion, along with slight increases in the nonpenetration (by 23%; two-tailed unpaired Student's t test, p = 0.2769) and pathway duration (by 31%; two-tailed unpaired Student's t test, p = 0.2525) phases (Fig. 1d, e). These results indicate that LsSP1 plays a role in L. striatellus feeding on rice plants.
LsSP1 is a salivary sheath protein not essential for salivary sheath formation
LsSP1 contained an open reading frame of 771 bp, encoding a protein of 256 amino acids. No conserved domain was found in LsSP1. The protein possessed an N-terminal signal peptide, with no transmembrane domain, indicating its secretory property (Supplementary Fig. 2a). Homology analysis demonstrated that LsSP1 was a planthopper-specific protein, and it exhibited 43.9% and 58.5% amino acid sequence identity to secretory proteins in the brown planthopper N. lugens (ASL05017) and the white-backed planthopper Sogatella furcifera (ON322954), respectively (Supplementary Fig. 2b). LsSP1 and its homologous genes in other planthopper species have not been well investigated previously. Spatiotemporal expression analysis showed that LsSP1 was mainly expressed at the nymph and adult stages (Supplementary Fig. 2c), and immunohistochemical (IHC) staining showed that LsSP1 was exclusively expressed in a pair of follicles in the primary salivary glands (Fig. 2a, b). The transcript level of LsSP1 was reduced by 90% after treatment of L. striatellus with dsLsSP1, and almost no LsSP1 signal was detected in salivary glands (Supplementary Fig. 3a-c). LsSP1 was secreted during insect feeding, and a band of approximately 35 kDa was detected in rice plants infested by L. striatellus, but not in non-infested plants (Fig. 2c). For most piercing-sucking insects, two types of saliva (gel and watery saliva) are ejected into plant tissues during the feeding process. Previously, the components of L. striatellus watery saliva collected via artificial diet were reported 30. However, LsSP1 was not detected in those samples. Thereafter, the salivary sheath (gel saliva) was collected from the inner layer of the Parafilm membrane to investigate whether LsSP1 existed in the salivary sheath. As a result, a band of LsSP1 was detected in the salivary sheath sample (Fig. 2c). By contrast, the band of LsSP1 in the watery saliva sample was not visible, indicating that LsSP1 was a salivary sheath protein.
Immunohistochemistry (IHC) staining analysis of salivary sheath on the Parafilm membrane and in rice plants further confirmed the presence of LsSP1 in the salivary sheath (Fig. 2d, e), whereas almost no signal was detected in salivary sheath secreted from dsLsSP1-treated L. striatellus (Supplementary Fig. 3d, e). LsSP1 deficiency did not influence salivary sheath formation, and there was no significant difference in salivary sheath appearance between dsLsSP1 treatment and the control as observed under scanning electron microscopy (SEM; Supplementary Fig. 4). Also, we did not find a significant difference in the length of salivary sheath on the Parafilm membrane (two-tailed unpaired Student's t test, p = 0.5926; measured from the top to the base of the salivary sheath under SEM) or in the number of salivary sheaths left on the rice surface (two-tailed unpaired Student's t test, p = 0.7615; measured by counting the ring-shaped salivary sheath structures under SEM) after dsLsSP1 treatment (Supplementary Fig. 4). These results suggest that LsSP1 is a salivary sheath protein, but that it is not indispensable for salivary sheath formation, which is significantly different from two previously reported salivary sheath proteins 14,31.
LsSP1 binds to the salivary sheath protein mucin-like protein LsMLP using Y2H, BiFC and LUC assays
Our previous work demonstrated that mucin-like protein (MLP) was the main component of the salivary sheath in the planthopper N. lugens 31. Amino acid alignment demonstrated that MLPs among the three planthoppers were highly homologous (Supplementary Fig. 5a). First, the function of L. striatellus MLP (LsMLP, accession number: ON568348) was investigated by RNAi (Supplementary Fig. 5b). The LsMLP-deficient L. striatellus secreted only short salivary sheaths (two-tailed unpaired Student's t test, p < 0.001; Supplementary Fig. 6), similar to NlMLP-deficient N. lugens 16. The number of salivary sheaths left on the rice plant significantly decreased when L. striatellus was treated with dsLsMLP (two-tailed unpaired Student's t test, p = 0.015; Supplementary Fig. 6). Furthermore, the LsMLP-deficient L. striatellus exhibited a high mortality rate (log-rank test, p < 0.001; Supplementary Fig. 5c), indicating that LsMLP is important for L. striatellus performance. Meanwhile, treatment of L. striatellus with dsLsMLP did not influence LsSP1 at the transcript (two-tailed unpaired Student's t test, p = 0.5317; Fig. 3a) or protein level (Fig. 3b). However, almost no fluorescence signal of LsSP1 was detected in salivary sheath secreted from dsLsMLP-treated L. striatellus, which was significantly different from the dsGFP-treated control (Fig. 3c). Thereafter, this study examined whether LsSP1 existed in the watery saliva or salivary sheath secreted from dsLsMLP-treated L. striatellus. Interestingly, more LsSP1 was found in the watery saliva than in the salivary sheath collected from dsLsMLP-treated L. striatellus.
[Figure 1 legend: a survival, b fecundity, and c honeydew excretion of untreated (CK), dsGFP-treated, and dsLsSP1-treated L. striatellus; d EPG parameters (8 h recordings) and e typical EPG waveforms over 1 h for dsGFP- and dsLsSP1-treated insects, with feeding behavior classified into nonpenetration (np), pathway duration (N1 + N2 + N3), phloem sap ingestion (N4), and xylem sap ingestion (N5) phases; statistics, sample sizes, and the rice variety (cv. ASD7) are detailed in the source legend.]
The potential interaction between LsSP1 and LsMLP was investigated using point-to-point Y2H assays. The yeast transformants expressing DNA-binding domain (BD)-LsMLP and activating domain (AD)-LsSP1 were found to grow on the quadruple dropout medium, which was not observed in transformants bearing the control constructs (Fig. 3e). Similar results were found in yeast transformants expressing BD-LsSP1 and AD-LsMLP (Fig. 3e). Also, the interaction between LsSP1 and LsMLP was verified by BiFC assay (Fig. 3f) and luciferase complementation (LUC) assay (Fig. 3g, h). These results may suggest that LsSP1 interacts with LsMLP in vivo.
LsSP1 can interact with rice papain-like cysteine proteases using Y2H, GST pull-down, BiFC, and LUC assays
To understand the potential roles of LsSP1 in the insect-plant interaction, Y2H screening was performed using a rice cDNA library. Seven proteins were found to potentially interact with LsSP1, including an Oryza sativa Oryzain (OsOryzain, NP_001389372.1, LOC_Os04g55650) (Supplementary Table 1). OsOryzain was highly homologous with the Arabidopsis RD21, tomato C14, and maize Mir3 cysteine proteases. It contained a predicted N-terminal secretion signal and a self-inhibitory prodomain followed by the peptidase and granulin domains (Supplementary Fig. 7a). OsOryzain is a member of the PLCPs, which act as a central hub in plant immunity and are required for the full resistance of plants to various pathogens 19. In tomato, C14 is converted into immature (iC14) and mature (mC14) isoforms that accumulate in various subcellular compartments and the apoplast 28. In rice, the expression of numerous PLCPs was found to be significantly induced upon L. striatellus infestation (Supplementary Fig. 10). The expression of OsOryzain was induced at 3 h post-infestation, and reached a peak at 6 h (Supplementary Fig. 11a). Salicylic acid (SA) plays a critical role in plant defense against sap-sucking herbivores [32][33][34]. The induction of SA biosynthetic genes and SA responsive genes was detected upon L. striatellus infestation (Supplementary Fig. 12). To investigate the possible role of SA in regulating PLCPs, the relative transcript levels of PLCPs were quantified after SA treatment. As a result, SA significantly induced the expression of 7 PLCPs, including OsOryzain (Supplementary Figs. 10 and 11a). These results indicate that numerous PLCPs might be associated with the L. striatellus-induced, SA-mediated plant defenses in rice plants. In addition, our experiments also investigated the protein levels of OsOryzain in response to SA treatment and L. striatellus infestation. The results demonstrated that SA treatment and L.
striatellus infestation induced the expression of OsOryzain in plant cells, while rice plants infested by L. striatellus secreted a lower amount of mature OsOryzain (mOsOryzain) into the apoplast than under SA treatment (Supplementary Note 2 and Supplementary Fig. 11b, c). We were not able to confirm that OsOryzain is involved in the plant defense response to L. striatellus; additional methods and results are provided in the Supplementary Information.
LsSP1 affects plant defenses in rice plants
To determine whether LsSP1 affects plant defenses in rice plants, the feeding preference of L. striatellus nymphs on plants pre-infested by dsGFP- and dsLsSP1-treated L. striatellus was compared. The results revealed that rice plants pre-infested with dsLsSP1-treated L. striatellus were less attractive to L. striatellus nymphs than those pre-infested with dsGFP-treated controls (Fig. 4a), suggesting that dsLsSP1-treated L. striatellus might elicit plant defenses and become less palatable to conspecifics. Thereafter, plants infested by dsGFP-treated L. striatellus and dsLsSP1-treated L. striatellus were subjected to transcriptomic sequencing. In total, 405 differentially expressed genes (DEGs) were identified, among which 90.9% were up-regulated in dsLsSP1-treated L. striatellus infested plants (Supplementary Fig. 16 and Supplementary Data 2). Enrichment analysis demonstrated that the majority of DEGs were involved in plant-pathogen interaction, environmental adaptation, transporters, plant hormone signal transduction, and terpenoid metabolism (Fig. 4b).
[Figure 4 legend: b KEGG pathway enrichment of DEGs (one-sided hypergeometric test, TBtools 70); c upregulation of SA-related genes and e upregulation of defense genes in dsLsSP1-treated versus dsGFP-treated L. striatellus infested plants (two-tailed unpaired Student's t test; PAD4 phytoalexin deficient 4, SAMT SA methyl transferase, SAGT SA glucosyl transferase, WRKY transcription factor WRKY, PR1 pathogenesis-related 1); d H2O2 levels in untreated rice plants and plants infested by dsRNA-treated L. striatellus (one-way ANOVA with Tukey's multiple comparisons test); data in d and e as mean ±SEM (n = 3 biological replicates); rice variety cv. ASD7; source data are provided as a Source Data file.]
Among the 28 SA biosynthetic or SA responsive genes (Supplementary Table 2), 8 were found to be differentially expressed. These DEGs were all up-regulated (Fig. 4c), indicating activation of the SA pathway in dsLsSP1-treated L. striatellus infested plants compared with dsGFP-treated L. striatellus infested ones. H2O2 accumulation has been used as a marker for plant basal defenses against sap-sucking herbivores 29,35. In this study, H2O2 levels in rice plants were significantly higher at 24 h after dsLsSP1-treated L. striatellus infestation than those after dsGFP-treated L.
striatellus infestation (one-way ANOVA test followed by Tukey's multiple comparisons test, p = 0.0298; Fig. 4d). Quantitative real-time PCR (qRT-PCR) analysis further confirmed the upregulation of defense genes (Fig. 4e), and the obtained results were consistent with the transcriptomic data. Collectively, these results demonstrate that a deficiency in LsSP1 secretion activates plant defenses as a response to L. striatellus infestation.
Overexpressing LsSP1 in rice plants benefits dsLsSP1-treated L. striatellus feeding
Transgenic Nipponbare rice plants with constitutive LsSP1 overexpression were constructed (Supplementary Fig. 17). The wild-type (WT) Nipponbare plant was used as a control. Two independent homozygous lines were used, and similar results were obtained. The results of comparison group 1 (WT and oeSP1#1) and comparison group 2 (WT and oeSP1#2) are presented in Fig. 5 and Supplementary Fig. 18, respectively. The resistance of the transgenic plants to L. striatellus (4th instar; wild-type) infestation was first investigated. No significant resistance changes in oeSP1 plants were found when compared with WT plants (two-tailed unpaired Student's t test, p = 0.4880 in comparison group 1, p = 0.6704 in comparison group 2; Supplementary Fig. 19). Compared with dsGFP-treated controls, the treatment of L. striatellus with dsLsSP1 did not affect insect survivorship after feeding on oeSP1 plants (log-rank test, p = 0.9913 on oeSP1#1, p = 0.5715 on oeSP1#2; Supplementary Fig. 20). For fecundity analysis, the dsLsSP1-treated L. striatellus produced fewer offspring than the dsGFP-treated control when feeding on WT plants (two-tailed unpaired Student's t test, p = 0.0299; Fig. 5a). However, this detrimental effect was not observed when dsLsSP1-treated L. striatellus fed on oeSP1 plants (two-tailed unpaired Student's t test, p = 0.8771 in comparison group 1, p = 0.7670 in comparison group 2; Fig. 5a and Supplementary Fig. 18a). For honeydew excretion, the dsLsSP1-treated L. striatellus excreted less honeydew than the dsGFP-treated control when feeding on WT plants, although the difference was not statistically significant (two-tailed unpaired Student's t test, p = 0.1047; Fig. 5b). There was also no significant difference in honeydew excretion between dsGFP- and dsLsSP1-treated L. striatellus when feeding on oeSP1 plants (two-tailed unpaired Student's t test, p = 0.3751 in comparison group 1, p = 0.9523 in comparison group 2; Fig. 5b and Supplementary Fig. 18b). EPG was subsequently used to monitor insect feeding behavior on the transgenic plants. Compared with dsGFP-treated controls, the dsLsSP1-treated L. striatellus exhibited a significant decrease in phloem sap ingestion when feeding on WT plants (two-tailed unpaired Student's t test, p = 0.0426 in comparison group 1, p = 0.0037 in comparison group 2; Fig. 5c and Supplementary Fig. 18c). Nevertheless, no significant difference in phloem sap ingestion was observed between dsGFP- and dsLsSP1-treated L. striatellus feeding on oeSP1 plants (two-tailed unpaired Student's t test, p = 0.8913 in comparison group 1, p = 0.9390 in comparison group 2; Fig. 5c and Supplementary Fig. 18c), indicating that overexpression of LsSP1 in rice plants rescued the feeding defects caused by a deficiency in LsSP1 secretion. To comprehensively illustrate the effects of LsSP1 on rice plants, transcriptomic analyses were performed on WT and oeSP1#1 plants that were untreated or infested by dsLsSP1-treated L. striatellus. DEGs between untreated and dsLsSP1-treated L.
striatellus infested plants were compared, and a total of 3396 and 1998 genes were identified in WT and oeSP1#1 plants, respectively (Supplementary Data 3-4). There were 2335 DEGs specifically identified in WT plants, but not in oeSP1#1 plants, and these were potentially correlated with LsSP1-associated responses. Enrichment analysis revealed that the majority of these genes were involved in plant hormone signal transduction, plant-pathogen interaction, MAPK signal transduction, and amino acid metabolism (Fig. 5d). Among the 28 SA-related genes, 18 were differentially expressed in at least one comparison group, and 16 were significantly up-regulated after infestation (Fig. 5e). Interestingly, these up-regulated genes were induced to a lower extent in oeSP1#1 plants compared with WT plants (Fig. 5e), indicating that LsSP1 overexpression attenuated the L. striatellus-induced SA biosynthesis and SA response.
Discussion
Herbivorous insects have developed dynamic and complex interactions with host plants. A better understanding of the underlying mechanisms will provide fundamental knowledge for developing efficient pest management strategies. In this study, the role of salivary LsSP1 in the interaction with rice hosts was investigated. Using Y2H, BiFC and LUC assays, we showed that LsSP1 was secreted into plant tissues during feeding and directly interacted with the salivary sheath protein LsMLP. In yeast and N. benthamiana, LsSP1 interacted with multiple PLCPs in various subfamilies. LsSP1 knockdown led to a decrease in insect feeding and reduced insect reproduction on WT plants, but not on oeSP1 plants. Our results indicate that the salivary sheath protein LsSP1, although not essential for salivary sheath formation, is beneficial for insect performance. During the feeding process, herbivorous insects can secrete hundreds of proteins into plant tissues. Previously, most salivary proteins were investigated individually, and different salivary proteins from one species were found to exert diverse roles in insect-plant interactions 36,37. For example, in M. persicae, the overexpression of the salivary protein Mp10 activates multiple defense pathways in N. benthamiana plants and reduces aphid performance 36,38. However, the overexpression of another salivary protein, Mp55, increases the attraction of N. benthamiana plants to aphids, and promotes aphid performance 37. L. striatellus can successfully ingest rice phloem sap with limited plant defenses. However, when LsMLP was overexpressed, elevated accumulation of H2O2 was detected (Supplementary Fig. 15), which contradicted the actual feeding situation. Therefore, there must exist other salivary components responsible for attenuating the LsMLP-induced plant defenses or masking the LsMLP signal. To the best of our knowledge, no such case has been reported in this insect species, although several proteins in aphids and mirid bugs have been found to be capable of inhibiting the plant defenses triggered by bacterial flg22 or oomycete INF1 [38][39][40]. Our study demonstrated that LsSP1 bound to LsMLP directly, providing clues that LsSP1 may prevent the activation of plant defenses by masking LsMLP, which deserves further investigation. Apoplastic PLCPs act in the front line of plant immunity against a wide range of pathogens, including fungi, bacteria, and oomycetes 41.
Depletion or knockdown of proteases such as Rcr3, RD19, and Pip1 significantly increases plant susceptibility to invading pathogens [42][43][44]. In maize, PLCPs are required to release the bioactive Zip1, a small peptide that activates SA signaling 45. In turn, Zip1 release will enhance PLCP activity, thereby establishing a positive feedback loop and promoting the SA-mediated defenses 45. Our study demonstrated that rice genes related to SA signaling were differentially expressed in plants infested by L. striatellus, and that OsOryzain was significantly induced upon SA treatment and L. striatellus infestation (Supplementary Figs. 11 and 12). This result might be an indicator that OsOryzain is regulated through the SA pathway. SA signaling plays an important role in the rice defense against planthoppers 33. The transcript level of OsOryzain reached a peak at 6 h post L. striatellus infestation, while a peak was reached at 12 h post SA treatment (Supplementary Fig. 11a). The different induction patterns indicated that other factors, in addition to the SA pathway, might also be responsible for OsOryzain expression, which deserves further investigation. Although our study showed an interaction between LsSP1 and OsOryzain in Y2H assays (Supplementary Note 1 and Supplementary Fig. 7), OsOryzain-knockout rice plants could not rescue the feeding defects caused by a deficiency in LsSP1 secretion in the way that LsSP1-overexpressing plants did (Supplementary Note 5, Fig. 5, and Supplementary Fig. 21). This might be explained by complex interactions between effectors and different plant defense actors. For example, in Phytophthora, the multifunctional effector Avrblb2 can neutralize host defense proteases by targeting PLCPs 28, and suppresses the defense-associated Ca2+ signaling pathway by interacting with host calmodulin 46. Salivary LsSP1 targets multiple PLCPs belonging to different subfamilies (Supplementary Fig. 8); knockout of OsOryzain alone cannot inhibit plant defenses initiated by other PLCPs. In addition, LsSP1 is capable of interacting with other plant and insect proteins (Fig. 3 and Supplementary Table 1). Salivary LsSP1 potentially exerts multiple roles during insect feeding, and may affect plant defense in other ways independent of PLCPs, which deserves further investigation.
Insects and plants
The L. striatellus strain was originally collected from a rice field in Ningbo, China. The insects and rice plants were maintained in a climate chamber at 25 ± 1°C, with 70-80% relative humidity, and a light/dark photoperiod of 16/8 h. Two rice varieties (cv. ASD7 and Nipponbare) were used in this study. The resistant variety ASD7, which contains the brown planthopper resistance gene BPH2, was also reported to confer resistance to the small brown planthopper 47,48, and was used extensively for insect bioassays. As the transgenic rice plants generated in this study were of Nipponbare background, wild-type Nipponbare plants were used as a control. Therefore, the rice variety used in transgenic rice plant analyses was Nipponbare. For the rest of the rice-associated experiments, ASD7 plants were used. In addition, N. benthamiana plants were kept in a growth chamber at 23 ± 1°C under a light/dark photoperiod of 16 h/8 h.
Analysis of genes abundantly expressed in salivary glands
The top 100 genes abundantly expressed in L. striatellus salivary glands were reported in our previous study 30.
To identify the potential planthopper-specific genes, these 100 genes were first subjected to BLAST searches against the predicted proteins of Acyrthosiphon pisum 49, Bemisia tabaci 50, Riptortus pedestris 51, Homalodisca vitripennis 52, and Drosophila melanogaster 53, with a cutoff E-value of 10^-5. Genes with no homology in the above species were subsequently searched against the NCBI nr database. Only genes with distributions restricted to the three planthoppers (L. striatellus, S. furcifera, and N. lugens) were defined as planthopper-specific genes. Thereafter, the expression patterns of the top 100 genes in different tissues were investigated based on the transcripts per million (TPM) expression values. The TPM expression values of L. striatellus genes were generated by analyzing the transcriptomic data of salivary gland, gut, fat body, carcass, testis, and ovary, and were used in our laboratory to preliminarily investigate gene expression patterns. The TPM expression values of the top 100 genes are displayed in Supplementary Data 1. For the identification of salivary gland-specific genes, the TPM of each gene in the salivary gland was compared with that in each of the other five tissues. Afterwards, the gene-relative abundance (ratio) in each comparison group was calculated. Genes with fold changes > 10 in all comparison groups were considered salivary gland-specific genes.
L. striatellus infestation and SA treatment
To investigate the effects of L. striatellus and SA on rice defense, 4-5-leaf stage rice seedlings were sprayed with 0.5 μM SA (#84210, Sigma-Aldrich, St. Louis, MO, USA) or infested by 5th instar L. striatellus nymphs (5 nymphs per plant, confined to a 5-cm plant stem with a plastic cup). The treated plants were maintained in a climate chamber at 25°C, and samples were collected at the indicated time points.
Quantitative real-time PCR analysis
Different tissue samples of carcasses (20), fat bodies (50), guts (50), and salivary glands (80) were dissected from the 5th instar nymphs in a phosphate-buffered saline (PBS) solution (137 mM NaCl, 2.68 mM KCl, 8.1 mM Na2HPO4 and 1.47 mM KH2PO4 at pH 7.4) using a pair of forceps (Ideal-Tek, Switzerland). Similarly, testes (50) and ovaries (20) were collected from adult male and female L. striatellus, respectively. The number of insects in each sample is given in the parentheses above. To extract RNA from N. benthamiana and rice, plants were first ground in liquid nitrogen. Then, samples were homogenized in the TRIzol Total RNA Isolation Kit (#9109, Takara, Dalian, China), and total RNA was extracted following the manufacturer's protocols. Afterwards, first-strand cDNA was reverse-transcribed from RNA using HiScript II Q RT SuperMix (#R212-01, Vazyme, Nanjing, China). qRT-PCR was subsequently run on a Roche Light Cycler® 480 Real-Time PCR System (Roche Diagnostics, Mannheim, Germany) using the SYBR Green Supermix Kit (#11202ES08, Yeasen, Shanghai, China). The PCR procedure was as follows: denaturation for 5 min at 95°C, followed by 40 cycles at 95°C for 10 s and 60°C for 30 s. The primers used in qRT-PCR were designed using Primer Premier v6.0 (Supplementary Table 3). L. striatellus actin, O. sativa actin, and N. benthamiana actin were used as internal controls, respectively. The relative quantification method (2^−ΔΔCt) was employed to evaluate quantitative variation. A Ct value ≥35 was taken to indicate that the gene was not expressed in the sample.
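As a minimal numeric illustration of the 2^−ΔΔCt method named above, the sketch below computes the fold change of a target gene relative to an actin control and a calibrator sample; all Ct values are invented for illustration and are not data from this study.

# Minimal sketch of relative quantification by the 2^(-ΔΔCt) method.
# Actin serves as the internal control, as in the study; all Ct values
# below are invented placeholders.

def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """Fold change of the target gene in a sample relative to a
    calibrator sample, normalized to the actin control."""
    d_ct_sample = ct_target - ct_actin        # ΔCt of the treated sample
    d_ct_ref = ct_target_ref - ct_actin_ref   # ΔCt of the calibrator
    dd_ct = d_ct_sample - d_ct_ref            # ΔΔCt
    return 2 ** (-dd_ct)

# Example: checking a knockdown against a dsGFP-treated calibrator.
fold = relative_expression(ct_target=28.4, ct_actin=18.1,          # treated
                           ct_target_ref=25.0, ct_actin_ref=18.0)  # calibrator
print(f"relative expression: {fold:.2f}")  # ~0.10, i.e. ~90% knockdown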
Three independent biological replicates, each repeated twice, were performed.
RNA interference
The DNA sequences of target genes were amplified using the primers listed in Supplementary Table 3, and cloned into the pClone007 Vector (#TSV-007, Tsingke, Beijing, China). The PCR-generated DNA templates containing the T7 sequence were used to synthesize the double-stranded RNAs with a T7 High Yield RNA Transcription Kit (#TR101-01, Vazyme). The RNA interference experiment was conducted as previously described 55. Briefly, insects were anaesthetized with carbon dioxide for 5-10 s. Then, dsRNA was injected into the insect mesothorax using a FemtoJet (Eppendorf-Netheler-Hinz, Hamburg, Germany). Afterwards, insects were kept on 4-5-leaf stage rice seedlings for 24 h and the living insects were selected for further investigation. Silencing efficiency was determined on the 4th day post-injection using the qRT-PCR method described above.
Insect bioassays
For survivorship analysis, a group of 30-40 insects (3rd instar nymphs) was treated with dsRNA and kept on 4-5-leaf stage rice seedlings in a climate chamber. The mortality rates for each treatment were recorded for ten consecutive days. Three independent replications were performed. For honeydew analysis, a parafilm (Bemis NA, Neenah, WI, USA) sachet was attached to the host plant stem, and the insects (5th instar nymphs) were confined in the sachet. At 24 h after feeding, the accumulation of honeydew was measured by weighing the parafilm sachet before and after feeding with an electronic balance (accuracy, 0.001 g; Sartorius, Beijing, China). At least 10 replicates were performed for each treatment. For fecundity analysis, newly emerged adults were treated with dsRNA. One day later, the insects were paired and allowed to oviposit for 10 days. Afterwards, the number of hatched offspring was counted. At least 10 replicates were conducted for each treatment.
Host choice test
A 4-5-leaf stage rice seedling was first placed in a glass tube, and 5 dsRNA-treated L. striatellus (5th instar) were allowed to feed on one rice plant for 24 h. Thereafter, the insects were removed, and rice plants pre-infested by the different dsRNA-treated L. striatellus were confined in a plastic cup (diameter, 6 cm; height, 10 cm) containing a release chamber. Later, a group of 17 L. striatellus (4th instar; wild-type, WT) was placed in the release chamber. The numbers of insects settling on each plant were counted at 1, 3, 6, 12, 24, 36, and 48 h. At least ten replicates were performed.
EPG recording analysis
The GiGA-8d EPG amplifier (Wageningen Agricultural University, Wageningen, The Netherlands), with a 10 TΩ input resistance and an input bias current of less than 1 pA, was used for EPG recording. Briefly, the dsRNA-treated L. striatellus (5th instar) were reared on filter paper with only water provided for 12 h. After anesthetization with CO2 for 10 s, a gold wire (Wageningen Agricultural University; diameter, 20 μm; length, 5 cm) was used to connect the insect abdomen to the EPG amplifier with a water-soluble silver conductive glue (Wageningen Agricultural University). The plant electrode was a copper wire (diameter, 2 mm; length, 10 cm) inserted into the soil in which one rice plant was growing.
Later, EPG recording was conducted for 8 h in a Faraday cage (120 cm × 75 cm × 67 cm, Dianjiang, Shanghai, China), with the gain of the amplifier set at 50× and the output voltage adjusted between −5 V and +5 V.
Immunohistochemistry staining
To prepare insect tissues, salivary glands were dissected from L. striatellus and fixed in 4% paraformaldehyde (#E672002, Sango Biotechnology, Shanghai, China) for 30 min. To prepare salivary sheath samples, the parafilm with attached salivary sheath was washed in PBS and fixed in 4% paraformaldehyde for 30 min. To prepare plant tissues, the rice plants infested by L. striatellus were collected and cut into segments ~3 cm in length using a scalpel. Then, the short rice sheaths were fixed in 4% paraformaldehyde and vacuum-infiltrated at 4°C. Afterwards, the sheaths were embedded in Jung Tissue Freezing Medium (#020108926, Leica Microsystems, Wetzlar, Germany) at −40°C. Later, the blocks were cut into 20 μm cross-sections using a Cryostar NX50 (Thermo Scientific, Waltham, MA), and fixed in 4% paraformaldehyde for an additional 30 min. The anti-LsSP1 serum, prepared by immunizing rabbits with purified GST-LsSP1 proteins, was produced via the custom service of Huaan Biotechnology Company (Hangzhou, China). The anti-OsOryzain serum, prepared by immunizing rabbits with peptides VRMERNIKASSGKC and DVNRKNAKVVTIDSY, was produced via the custom service of Genscript Biotechnology Company (Nanjing, China). The anti-LsSP1 serum was conjugated with Alexa Fluor™ 488 NHS Ester (#A20000, ThermoFisher Scientific), while the anti-OsOryzain serum was conjugated with Alexa Fluor™ 555 NHS Ester (#A37571, ThermoFisher Scientific), following the manufacturer's protocols. Thereafter, the insect/plant/parafilm samples were incubated with the above fluorophore-conjugated serums overnight at 4°C at a dilution of 1:200, with the actin dye phalloidin-rhodamine (#A22287, ThermoFisher Scientific) at room temperature at a dilution of 1:500 for 30 min, and with 4′,6-diamidino-2-phenylindole (DAPI) solution (#ab104139, Abcam, Cambridge, USA). Finally, fluorescence images were obtained using a Leica SP8 confocal laser-scanning microscope (Leica Microsystems).
Preparation of protein samples
Salivary sheath samples and watery saliva samples were collected from 900 to 1000 nymphs as previously described 14,56. Briefly, 5th instar L. striatellus nymphs were transferred from the rice seedlings into a plastic Petri plate. Approximately 300 μl of diet containing 2.5% sucrose was added between two layers of stretched Parafilm, and the insects were allowed to feed for 24 h. Ten devices were used for saliva collection, with each device containing 90-100 L. striatellus. For the preparation of watery saliva samples, the liquid was collected from the space between the two layers of Parafilm. To prepare salivary sheath samples, the upper surface of the Parafilm with salivary sheath firmly attached was carefully detached and washed in PBS three times. As the salivary sheath was difficult to dissolve, a lysis buffer of 4% 3-[(3-cholamidopropyl)-dimethylammonio]-1-propanesulfonate (#20102ES03, Yeasen), 2% SDS (#A600485, Sango Biotechnology) and 2% DTT (#A100281, Sango Biotechnology) was used to solubilize the salivary sheath proteins under gentle shaking on an orbital shaker at room temperature for 1 h, as previously described 14,56. With this method, the majority of the salivary sheath, although not all of it, can be dissolved 56.
Since it was difficult to quantify the protein concentration in the saliva solutions, the salivary sheath samples and watery saliva samples were each concentrated to 50 μl using a 3-kDa molecular-weight cutoff Amicon Ultra-4 Centrifugal Filter Device (Millipore, MA, USA). The rice apoplast was collected with Buffer A (consisting of 0.1 mol/L Tris-HCl, 0.2 mol/L KCl, 1 mmol/L PMSF, pH 7.6) as previously described 57. Briefly, 5.0 g of rice plants was vacuum infiltrated with Buffer A for 15 min. Then, the remaining liquid on the surface was dried with absorbent paper, and the plants were placed inside 1-ml tips and centrifuged in 50-ml conical tubes at 1000 × g for 20 min. The apoplastic solution was concentrated using a 3-kDa molecular-weight cutoff Amicon Ultra-4 Centrifugal Filter Device. For the preparation of insect and plant samples, the insects/plants were collected at the indicated time points and homogenized in RIPA Lysis Buffer (#89900, ThermoFisher Scientific). To detect the secretion of LsSP1 into rice plants, approximately one hundred 5th instar nymphs were confined to a 2-cm stem section and allowed to feed for 24 h. The outer rice sheath was collected for the western-blotting assay.
Western-blotting assay
The protein concentrations were quantified using a BCA Protein Assay Kit (#CW0014S, CwBiotech, Taizhou, China) in line with the manufacturer's instructions. After the addition of 6× SDS loading buffer, the protein samples were boiled for 10 min. Proteins were separated on 12.5% SDS-PAGE gels and transferred to PVDF membranes. Then, the blots were probed with anti-LsSP1 serum or anti-OsOryzain serum diluted 1:5000, followed by incubation with horseradish peroxidase (HRP)-conjugated goat anti-rabbit IgG antibody (1:10,000, #31460, ThermoFisher Scientific). Images were acquired with an AI 680 image analyzer (Amersham Pharmacia Biotech, Buckinghamshire, UK). The band intensities in immunoblot analyses were quantified using ImageJ software v1.53e (https://imagej.nih.gov/). To verify equal protein loading, gels were further stained with Coomassie brilliant blue (CBB). The full scans of blots and gels are provided in Supplementary Fig. 23 and the Source Data file.
Identification and phylogenetic analysis of PLCPs
The O. sativa PLCPs were identified following the procedure described previously 25. Briefly, the amino acid sequences of 31 Arabidopsis thaliana PLCPs 58 were retrieved and used as queries to search for PLCP homologs in the Rice Genome Annotation Project Database (http://rice.plantbiology.msu.edu), with a cutoff e-value of 10^-5. The putative PLCPs were further validated by alignment to the NCBI nr database. Thereafter, the structures and conserved domains of the PLCPs were analyzed with InterPro. Seven proteins predicted in the Rice Genome Annotation Project Database were incomplete, including Os04g55650 (NP_001389372), Os09g39160 (BAD46641), Os09g39090 (XP_015611357), Os09g39170 (BAD46642), Os09g39120 (XP_015611254), Os01g24570 (BAD53944), and Os07g01800 (BAC06931). The complete sequences were retrieved from the NCBI database by BLAST search, and the corresponding GenBank accessions are provided in the brackets. For phylogenetic analysis, all PLCPs were aligned with MAFFT v7.450, and gaps were trimmed using Gblocks v0.91b 59. The substitution model was evaluated using ModelTest-NG with default parameters 60. Afterwards, maximum likelihood (ML) trees were constructed using RAxML-NG v0.9.0 with 1000 bootstrap replications 61.
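A minimal sketch of how the alignment-to-tree pipeline above could be scripted is shown below. The file names are hypothetical, and the exact command-line options differ between releases of MAFFT, Gblocks, ModelTest-NG, and RAxML-NG, so they should be checked against the installed versions; the model passed to RAxML-NG is whichever one ModelTest-NG reports as best.

# Sketch of the PLCP phylogenetics pipeline described above:
# MAFFT alignment -> Gblocks trimming -> ModelTest-NG -> RAxML-NG.
# File names are hypothetical; check flags against installed versions.
import subprocess

def run(cmd, **kwargs):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True, **kwargs)

# 1. Align all PLCP protein sequences with MAFFT (writes to stdout).
with open("plcp_aligned.fasta", "w") as aln:
    run(["mafft", "--auto", "plcp_all.fasta"], stdout=aln)

# 2. Trim poorly aligned blocks with Gblocks (-t=p for protein).
#    Gblocks writes its output to <input>-gb; some builds return a
#    nonzero exit code even on success, so its status is not checked.
subprocess.run(["Gblocks", "plcp_aligned.fasta", "-t=p"])

# 3. Evaluate substitution models with ModelTest-NG (amino acid data).
run(["modeltest-ng", "-i", "plcp_aligned.fasta-gb", "-d", "aa"])

# 4. Build the ML tree with 1000 bootstrap replicates in RAxML-NG,
#    substituting the best-fit model reported in step 3 (LG+G4 is a
#    placeholder here).
run(["raxml-ng", "--all", "--msa", "plcp_aligned.fasta-gb",
     "--model", "LG+G4", "--bs-trees", "1000"])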
Scanning electron microscopy
Insects were allowed to feed on rice plants or artificial diets for 24 h. The rice plants and the parafilm with attached salivary sheaths were cut and washed with PBS. Later, the SEM samples were attached to a stub and dried in a desiccator under vacuum. After gold-sputtering, the samples were observed with a TM4000 II plus SEM (Hitachi, Tokyo, Japan). The length of salivary sheath on the Parafilm membrane was measured from the top to the base of the sheath (Supplementary Fig. 4a), while the number of salivary sheaths left on the rice surface was determined by counting the ring-shaped salivary sheath structures (Supplementary Fig. 4c) on a 4-cm rice stem.
Agrobacterium-mediated plant transformation and diaminobenzidine staining
Details of Agrobacterium-mediated plant transformation in N. benthamiana and of diaminobenzidine staining of N. benthamiana leaves are described in Supplementary Methods.
Protein-protein interaction assays
Details of the Y2H screening assay, Y2H point-to-point verification assay, GST pull-down assay, BiFC assay, luciferase complementation (LUC) assay, and OsOryzain-salivary sheath binding assay are described in Supplementary Methods.
Generation of transgenic rice plants
To generate the oeSP1 plants, the coding sequence (without the signal peptide) was amplified and cloned into a binary expression vector driven by the CaMV 35S promoter. The recombinant vector was introduced into A. tumefaciens strain EHA105 by the heat-shock method. Transgenic rice plants were generated through Agrobacterium-mediated transformation. Briefly, rice seeds (cv. Nipponbare) were sterilized with 75% ethanol for 1 min and 50% sodium hypochlorite for 20 min. After washing three times in sterile water, the sterilized seeds were transferred onto NBi medium (N6 macro elements, B5 microelements, B5 vitamins, 27.8 mg/L FeSO4·7H2O, 37.3 mg/L Na2-EDTA, 500 mg/L proline and glutamic acid, 300 mg/L casein hydrolysate, 2 mg/L 2,4-dichlorophenoxyacetic acid, 100 mg/L inositol, and 30 g/L sucrose) for 20 days at 26°C for callus induction. The induced calli were incubated with Agrobacterium (OD600 = 0.2) for 10 min, and then cultured on NBco medium (NBi medium supplemented with 100 µmol/L acetosyringone, pH 5.5) for 3 days at 20°C. After washing with sterile water, the calli were transferred onto NBs medium (NBi medium supplemented with 500 mg/L cephamycin and 30 mg/L hygromycin) for 25 days. Subsequently, the resistant calli were transferred onto NBr medium (NBi medium supplemented with 0.5 mg/L α-naphthalene acetic acid, 3 mg/L 6-benzylaminopurine, 500 mg/L cephamycin, and 30 mg/L hygromycin) for shoot regeneration. The regenerated shoots were transferred into 1/2× Murashige-Skoog medium for rooting. The transgenic plants were grown in the greenhouse and were confirmed by RT-PCR with reverse-transcribed cDNA as the template using LsSP1-specific primers (Supplementary Table 3). Two independent T3 homozygous overexpression lines (Supplementary Fig. 17a) were used for subsequent experiments.
Evaluation of L. striatellus resistance in transgenic rice plants
The L. striatellus resistance of rice plants was scored as previously described 62,63. Briefly, five rice seedlings were grown in a 10-cm-diameter plastic cup with a hole at the bottom. At the 4-5-leaf stage, the seedlings were infested with L. striatellus nymphs (4th instar; wild-type, WT) at a dose of 10 insects per seedling.
After 20 days, the injury level of the rice plants was checked, and the published standard was used to calculate the average injury level 62 (Supplementary Table 4). Four replicates were performed for each line.
Performance of dsRNA-treated L. striatellus on transgenic rice plants
To investigate the performance of dsRNA-treated L. striatellus on transgenic rice plants, 3rd instar nymphs (for survivorship analysis), 4th instar nymphs (for honeydew and EPG analyses), and newly emerged adults (for fecundity analysis) were treated with dsGFP and dsLsSP1, respectively. Insect bioassays for survivorship, honeydew, fecundity, and EPG analyses were performed as described above. Two independent homozygous overexpression/knockout transgenic lines were used.
Transcriptomic sequencing
The untreated rice plants, or rice plants infested by dsLsSP1-treated L. striatellus for 24 h, were collected and homogenized in TRIzol Reagent (#10296018, Invitrogen, Carlsbad, CA, USA). Thereafter, total RNA was extracted according to the manufacturer's instructions, and the RNA samples were sent to the Novogene Institute (Novogene, Beijing, China) for transcriptomic sequencing as previously described 64. Briefly, poly(A)+ RNA was purified from 20 μg of pooled total RNA using oligo(dT) magnetic beads. Fragmentation was performed in the presence of divalent cations at 94°C for 5 min. Then, N6 random primers were used for reverse transcription into double-stranded complementary DNA (cDNA). After end-repair and adapter ligation, the products were amplified by PCR and purified using a QIAquick PCR purification kit (Qiagen, Hilden, Germany) to create a cDNA library. The library was sequenced on an Illumina NovaSeq 6000 platform. Thereafter, all sequencing data generated were submitted to the NCBI Sequence Read Archive under accession numbers PRJNA833487 and PRJNA815455.
Analysis of transcriptomic data
The output raw reads were filtered using the internal software, and the clean reads from each cDNA library were aligned to the reference sequences in the Rice Genome Annotation Project Database using HISAT2 v2.1.0 65. Low-quality alignments were filtered with SAMtools v1.7 66. Transcripts per million (TPM) expression values were calculated using Cufflinks v2.2.1 67. DESeq2 v2.2.1 68 was used to identify the DEGs; genes with log2 ratio > 1 and adjusted p value < 0.05 were identified as differentially expressed. To reveal overall differences in gene expression patterns among the transcriptomes, the R function plotPCA (github.com/franco-ye/TestRepository/blob/main/PCA_by_deseq2.R) and DNAstar v8.0 69 were used to perform the PCA analysis and correlation analysis, respectively. KEGG enrichment analyses were performed using TBtools software v1.0697 70.
Statistical analysis
The log-rank test (SPSS Statistics 19, Chicago, IL, USA) was applied to determine the statistical significance of survival distributions. Two-tailed unpaired Student's t test (comparisons between two groups) or one-way ANOVA test followed by Tukey's multiple comparisons test (comparisons among three groups) was used to analyze the results of qRT-PCR, EPG, proteolytic activity, honeydew measurement, offspring measurement, and host choice analysis. The exact p value of each statistical test is provided in the Source Data file. Data were graphed in GraphPad Prism 9.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
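As a minimal illustration of the DEG thresholds stated above, the sketch below filters a DESeq2 results table with pandas. The file name is a made-up placeholder; the column names log2FoldChange and padj follow DESeq2's standard output, and the log2 ratio threshold is applied to the absolute value so that both up- and down-regulated genes are captured.

# Minimal sketch: filter a DESeq2 results table for DEGs using the
# thresholds above (log2 ratio > 1, adjusted p < 0.05). The input file
# name is hypothetical; log2FoldChange and padj are DESeq2's standard
# output columns.
import pandas as pd

res = pd.read_csv("deseq2_results.csv", index_col=0)

# Genes changed more than two-fold in either direction with an
# adjusted p value below 0.05; rows with missing padj are excluded.
degs = res[(res["log2FoldChange"].abs() > 1) & (res["padj"] < 0.05)]

up = (degs["log2FoldChange"] > 0).sum()
down = (degs["log2FoldChange"] < 0).sum()
print(f"{len(degs)} DEGs: {up} up-regulated, {down} down-regulated")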
Data availability
The sequencing data generated in this study have been deposited in the NCBI Sequence Read Archive under accession numbers PRJNA833487 and PRJNA815455. The TPM expression values of all genes generated from the sequencing data can be found in the Source Data file. Sequence data can be found in GenBank under the following accession numbers: LsSP1, ON322955; NlSP1, ASL05017; SfSP1, ON322954; OsOryzain, NP_001389372; zingipain-2 Os09g39090, XP_015611357; putative cysteine proteinase Os09g39170, BAD46642; ervatamin-B Os09g39120, XP_015611254; putative cysteine protease Os01g24570, BAD53944; and Os07g01800, BAC06931. The O. sativa reference genome is publicly available in Phytozome (https://data.jgi.doe.gov/refine-download/phytozome?organism=Osativa&expanded=323). PLCP accessions are listed in Supplementary Fig. 8 and the corresponding sequences can be found in the Source Data file. Sequences of the top 100 genes abundantly expressed in L. striatellus salivary glands can be found in the Source Data file. Source data are provided with this paper.
2023-02-10T15:03:10.908Z
2023-02-10T00:00:00.000
{ "year": 2023, "sha1": "6c45d8fc7e503f54856b79323c730850e76e5ec5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "6c45d8fc7e503f54856b79323c730850e76e5ec5", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
243991382
pes2o/s2orc
v3-fos-license
‘Asking for help’: a qualitative interview study exploring the experiences of interpersonal counselling (IPC) compared to low-intensity cognitive behavioural therapy (CBT) for women with depression during pregnancy
Background
Treating depression early in pregnancy can improve health outcomes for women and their children. Current low-intensity psychological therapy for perinatal depression is a supported self-help approach informed by cognitive behavioural therapy (CBT) principles. Interpersonal counselling (IPC) may be a more appropriate low-intensity talking therapy for addressing the problems experienced by pregnant women with depression. A randomised feasibility trial (ADAGIO) has compared the acceptability of offering IPC for mild-moderate antenatal depression in routine NHS services compared to low-intensity CBT. This paper reports on a nested qualitative study which explored women's views and expectations of therapy, their experiences of receiving IPC, and the views of Psychological Wellbeing Practitioners (PWPs, junior mental health workers) on delivering the low-intensity therapy.
Methods
A qualitative study design using in-depth semi-structured interviews and focus groups. Thirty-two pregnant women received talking therapy within the ADAGIO trial; 19 contributed to the interview study from July 2019 to January 2020: 12 who had IPC and seven who had CBT. All six PWPs trained in IPC took part in a focus group or interview. Interviews and focus groups were recorded, transcribed, anonymised, and analysed using thematic methods.
Results
Pregnant women welcomed being asked about their mental health in pregnancy and having the chance to have support in accessing therapy. The IPC approach helped women to identify triggers for depression and explore relationships, using strategies such as 'promoting self-awareness through mood timelines', 'identifying their circles of support', 'developing communication skills and reciprocity in relationships', and 'asking for help'. PWPs described how IPC differed from their prior experiences of delivering low-intensity CBT. They reported that IPC included a useful additional emotional component which was relevant to the perinatal period.
Conclusions
Identifying and treating depression in pregnancy is important for the future health of both mother and child. Low-intensity perinatal-specific talking therapies delivered by psychological wellbeing practitioners in routine NHS primary care services in England are acceptable to pregnant women with mild-moderate depression. The strategies used in IPC to manage depression, including identifying triggers for low mood and communicating the need for help, may be particularly appropriate for the perinatal period.
Trial registration
ISRCTN 11513120. 02/05/2019.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12884-021-04247-w.
Background
Early detection of depression during pregnancy is important because depression can adversely affect birth outcomes and neonatal health, and if untreated can persist postnatally. Treating depression early in pregnancy can improve mother-infant attachment, and the cognitive, emotional, and behavioural outcomes for children [1]. UK guidelines recommend that midwives screen for antenatal depression at the woman's first midwife appointment using depression screening questions, and regularly ask women about their current mental health during pregnancy [2].
Although qualitative studies have shown that both midwives and women regard screening for antenatal depression as acceptable and important [3], screening alone is not enough and referral to appropriate treatment is important. There is limited evidence for the effectiveness of nonpharmacological psychological interventions for antenatal depression [4] and which treatments might be most appropriate. This is especially true for low-intensity therapies [5] and is important in healthcare systems that have adopted stepped-care approaches where the first offer is typically a low-intensity therapy. Current talking therapy for mild to moderate depression during pregnancy as provided by Improving Access to Psychological Therapies (IAPT), a national stepped-care, primary healthcare psychological service in England, is a supported self-help approach informed by cognitive behavioural therapy (CBT) principles [2]. However, only offering low-intensity CBT for pregnant women may be problematic, because some studies have reported difficulties enrolling and/or retaining pregnant or postpartum women in CBT [6][7][8]. This may be because the CBT approach is not relevant to, or does not address, the specific problems of the perinatal period [9]. O'Mahen's qualitative study highlighted that women in the perinatal period struggled with internalization of "motherhood myths," self-sacrifice, and managing social support during this period [9]. CBT focusses on people's thoughts and behaviours, and this may miss difficulties with emotion around transitions, difficulties around communication and support (highlighted as important by women), and issues with complicated grief [9]. Thus, a significant number of women may miss the opportunity to engage with treatments that are meaningful or hold face validity for them. Consequently, several recent studies have modified CBT to improve feasibility and acceptability among pregnant women [10][11][12][13]. Another promising talking therapy is Interpersonal Counselling (IPC), which is a low-intensity treatment derived from Interpersonal Psychotherapy (IPT) [14]. IPC may be more appropriate for addressing the problems that depressed women experience during pregnancy and postnatally. It helps individuals to develop useful strategies to manage depression in an interpersonal context and can involve a partner if appropriate. It focuses on strategies to manage changes in role, conflict, isolation and loss (such as miscarriage, stillbirth, previous loss of would-be grandparents), and the impact of these on relationships. However, there are limited data on the effectiveness and acceptability of IPC perinatally and no studies of IPC in pregnancy in the UK. A small feasibility study of IPC for antenatal depression in the US amongst low-income mothers indicated high satisfaction with IPC and some improvement in mood [15]. There are currently no studies comparing IPC with low-intensity CBT in the perinatal period. A randomised feasibility trial (ADAGIO) comparing the acceptability of offering IPC for antenatal depression in routine NHS primary care services in England with low-intensity CBT has recently been completed [16,17]. The ADAGIO trial was successful in recruiting pregnant women with mild-moderate depression. Treatment adherence was high (over 70% of women completed their IPC course to the satisfaction of the Psychological Wellbeing Practitioners (PWPs) delivering the therapy); women reported IPC was acceptable, and supervisors reported high treatment fidelity in IPC PWPs [17].
PWPs are junior mental health workers trained to assess and support people with common mental health problems (principally anxiety disorders and depression) in the management of their recovery. A nested qualitative study explored the views of participating pregnant women about their expectations of therapy and experiences of receiving IPC, and the views of the PWPs who delivered the low-intensity therapy. The aim of this paper is to understand the views of women and PWPs about these talking therapies in pregnancy, with a particular focus on IPC. Setting The ADAGIO feasibility trial was undertaken in two geographical sites in England. Twelve existing PWPs working for the IAPT services were recruited and six were trained to deliver IPC (three at each site); the other six received a short top-up in CBT using perinatal-specific guided self-help. Differences between the boundaries of IAPT services and midwifery services in each location meant that the population from which to identify potentially eligible women for the study was smaller (by approximately 50%) at one site. Participants The ADAGIO study recruited 52 pregnant women (12-26 weeks' gestation) with mild to moderate depression from January to September 2019. Participants were screened for depression using an Edinburgh Depression Scale (EPDS) [18] score of 10 or above, and ICD-10 mild or moderate depression determined by the Clinical Interview Schedule Revised (CIS-R) [19], a structured diagnostic computerised psychiatric interview. Randomisation was carried out remotely by the Bristol Clinical Trials Unit, stratified by recruiting centre and minimised by parity (with random block sizes). Of those recruited, 42 participants provided follow-up data and 32 received their allocated talking therapy (either IPC or CBT). Women who received therapy (either IPC or CBT) were purposively sampled to approach for an interview to achieve a maximum variation sample in terms of study arm, maternal age and parity, and study site. Women were approached by telephone and email for interview by DJ (qualitative researcher) after they had completed their therapy, and telephone interviews took place between July 2019 and January 2020. DJ also took detailed notes of several shorter telephone conversations with those who had declined or dropped out of therapy. DJ had met two-thirds of the women at recruitment to the trial and they knew that she was a member of the study team. Five of the PWPs who were trained in IPC were involved in online focus group discussions (led by JI and DJ, both experienced qualitative researchers) and one who could not attend was interviewed by DJ. Interview topic guides were informed by the research literature, team discussions and input from our Patient Advisory Groups. Interviews took between 25 and 75 min (median 50 min) and the two focus groups were 60 and 70 min long. Analysis Thematic data analysis was carried out by trained qualitative researchers (DJ, LB, JI) who have extensive experience of qualitative research and evaluation of health care services, from psychology, health services research and midwifery backgrounds. Interviews and focus groups were audio-recorded, transcribed verbatim by a professional transcription service and anonymised.
Analysis of the data was an ongoing and iterative process using NVivo 11 software to organise and code the transcripts [QSR International Pty Ltd]. Transcripts were initially coded by one qualitative researcher (DJ). Codes and themes were developed and discussed with the lead qualitative researcher (JI) at regular intervals during data collection and analysis to achieve consensus. Six interview transcripts were also read and coded by an independent qualitative researcher (LB) to compare and discuss the coding framework [20]. Interviews continued until data saturation was achieved, in that no new themes were arising from the data. All analytical decisions were shared and discussed by the qualitative research group using a consensus process to agree the final coding and thematic framework. The study received North of Scotland Research Ethics Committee (REC) approval on October 29th 2018 and Health Research Authority approval on November 14th 2018. Results A total of 19 women who had received therapy contributed to the qualitative study; 16 were interviewed (11 who had IPC; five had CBT) and three had telephone conversations (one IPC, two CBT), giving opinions from 12 who had IPC and seven who had CBT. All women who were approached for interview agreed, but two were not available for a while due to the imminent induction of their babies. The women in the study had a mean age of 32.6 years (range 25-42); nine were expecting their first baby and 10 were expecting their second or subsequent baby. All six PWPs trained in IPC at both sites were interviewed. Quotes from women and PWPs are presented: women are identified by site and therapy arm, and PWPs by site. 1. Themes from all therapy interviews Overall, pregnant women were positive about receiving therapy through the ADAGIO study, welcomed being asked about their mental health in pregnancy, and welcomed having the chance to have support in accessing therapy. For some, the study offered the first opportunity to acknowledge and explore their low mood, and it was the only time they were offered help and treatment. "Just very grateful for the opportunity that I have had. I would never have thought of therapy … I just don't think I had the insight to do so, and my GP didn't offer anything like that either." (#1021, site A) Themes from all the interviews are reported initially to describe the overall expectations of therapy as 'engagement with antenatal depression therapy', 'tools for life' and 'PWP insights'. Engagement with therapy for antenatal depression Some women were unsure about how helpful the treatments might be, but a willingness to engage with their sessions enabled them to get the most out of either therapy, even if they were initially doubtful. Women's perspectives often changed throughout their course of treatment so that by completion, they were able to reflect upon the benefit of persisting with exercises and acknowledge positive changes that had occurred. Tools for life Women found both treatments focused on practical issues, and when they engaged with these they were pleased with their therapy. Both therapies offered 'tools for life' and women appreciated being given things to do or handouts that they could refer to later. "Accepting help was a big issue … so she would say why don't you just try and accept help for this and see how it goes, and then she would say things like your homework this week is to think about more ways of self-care, those kinds of things".
(#1018, site A, IPC) "The coping mechanisms make sense, and they were explained well, and they do work, I do believe they work if you practice them, so I did feel I got help." (#1025, site A, CBT) Women in both therapy groups talked about being encouraged to do 'homework' between sessions to put into practice strategies that had been identified in their sessions. In CBT these appeared to be drawn from an existing set of exercises. "I think once I really tried to do the short exercises and homework, for want of a better word, I found them more and more useful as I went through." (#2002, site B, CBT) IPC's approach, by contrast, provided a framework for identifying and making goals, which helped women start focusing on specific issues that could be affecting their mood and that they 'wanted to tackle'. PWP insights PWPs commented that the task-orientated low-intensity CBT approach is more therapist-led and to them it lacked the emotional component of IPC, which they felt was more responsive to women's emotional experiences. IPC practitioners welcomed this new element in their sessions as they believed it enabled a more holistic approach to women's therapy. "I felt more 'with' the client I guess, I understood their emotional perspective. I understood how they were feeling in the room a little bit more, potentially because we were talking about emotions, and asking "how does that make you feel right now?", whereas with CBT we are very much focused on how do we use this technique, how can we use it at home. So, I did feel maybe slightly more emotional connection with the clients." (PWP focus group, site A) Interpersonal counselling interview themes This more emotion-focussed approach of IPC helped pregnant women to identify triggers for low mood. These were facilitated through exploring interpersonal-specific depression triggers, using exercises such as a timeline of depression, relationship mapping and circles of support. Women spoke of IPC in terms of working collaboratively with their practitioner to develop solutions to their issues, often saying "we did this or that". Importantly, they did not feel they were being told what to do; rather, they reported that they were helped to find strategies that would work for them, with their PWP sometimes suggesting ideas: "It definitely wasn't a case of her saying you need to do this, you need to do that, you very much get there together." (#1018, site A, IPC) "I was worried that once we stopped seeing each other that actually I would become quite down and depressed after the baby is born, so we looked into that and we looked into the support group as to who could help the emotional side, who I know I can speak to, and she told me that I could go back to them at any point as well." (#2005, site B, IPC) Focussing on views of women who received IPC (n = 12) and the PWPs (n = 6) who delivered it, the themes generated included 'promoting self-awareness through mood timelines', 'circles of support', 'communication skills and relationships', and 'asking for help'. PWPs also compared delivering both therapies. Promoting self-awareness through mood timelines Women reported that the IPC strategies supported them to increase their self-awareness, identify their support networks, and learn to ask for help. IPC practitioners initiated the process by helping them to make mood timelines, which women felt helped them to recognise their triggers for low mood.
Circles of support Identifying their support networks of people who they could call upon was an important step towards women starting to ask for help. They were encouraged to identify their 'circle of support' by creating a diagram with the practitioner, which most found to be helpful: "I remember doing one exercise where you were encouraged to draw on all the people of support in your life, so to really look at who you would talk to, like your friends, your family, people at work and that sort of thing. So you had to draw a physical diagram, [ … ] equally I think that the goal of that type of therapy is that you are using all those supports and you realise it's okay to talk to those people about things that was going on." (#1021, site A, IPC) " … having the circle, so knowing who is in the support group and [name] reflected on that actually, and there was a chart that she gave me where I could write who it was, my relationship with them, and what good they bring me, and can I rely on them for emotional and physical support, that was really helpful to go through, to know who I had and who I could rely on." (#2005, site B, IPC) Communication skills and relationships Women were encouraged to work specifically on developing their communication skills, identifying problem areas in their relationships with others, recognising the reciprocal nature of communication, and trying out different approaches, with the aim of improving the way they manage such interactions. Both women and PWPs were able to see the benefit of working on these issues, which resulted in very positive changes for some. "Probably all the stuff about how you communicate and the words you use rather than … and how that might make the other person feel or be defensive, and that was all quite positive." (#1014, site A, IPC) "There was freedom and a different focus, still depression, but there was focus on relationship that's not really the main thing in CBT, and I think a lot of clients that I worked with found that helpful and having space to talk about things a bit more freely it seemed like it was helpful." (PWP focus group, site A). Asking for help Reaching out to others was difficult for many women as it involved acknowledging their low mood, possibly for the first time, and then opening up to other people to ask for help. Most women admitted finding it difficult to seek or accept help, and it seemed that working through this in IPC could be powerful in enabling positive changes in women's behaviour. Comparison of IPC with CBT Most PWPs delivering IPC were surprised at how different it was from low-intensity CBT, and initially felt under-prepared to deliver the new therapy. However, they gained confidence with each participant and ultimately reported that they enjoyed the opportunity to try out a different approach, which they felt was more woman-centred and highly pertinent to the perinatal period. "I was expecting something completely different, so there were some elements of the structure that reminded me of CBT, but I would say IPC allowed more freedom, more space for building a therapeutic relationship that maybe CBT at times lacks, especially if you focus quite rigidly on everything." (PWP focus group, site A) "It was definitely nice to be able to experience a bit more freedom when talking with people and bringing emotions much more into the room.
I think CBT doesn't ignore those things, but it doesn't actively talk about them, and so it's quite nice to be able to check in every single week with actual emotions and to give someone that space." (PWP focus group, site A) " … I can see a role in perinatal/antenatal period, with the communication, just some of those really simple things we were doing with the communication it can make a real difference at that low intensity level." (PWP focus group, site B) The relationship between the PWP and client differed between the two therapies, requiring PWPs to adapt to be more responsive to issues women wanted to talk about in IPC. Most welcomed the opportunity to learn the different approach IPC demanded of them and enjoyed being able to encourage women to talk about their feelings. Discussion This study has highlighted that it is possible to deliver low-intensity interpersonal counselling for depression during pregnancy in large community settings. Women and practitioners liked IPC and found it to be very relevant to the perinatal context. They particularly highlighted that it helped them to identify triggers for depression and to communicate the need for help. They valued the exploration of relationships using strategies such as a mood timeline, relationship mapping and circles of support. PWPs welcomed the opportunity to learn a more emotion-focussed approach to treating pregnant women. This qualitative study was part of a feasibility trial of IPC compared to CBT, and the interviews also aimed to assess the acceptability of the new therapy. The findings will inform the delivery of the trial processes in a larger trial which will focus on effectiveness and cost-effectiveness. Other studies exploring views of perinatal mental healthcare within IAPT have shown that women reported positive experiences of receiving support from IAPT for perinatal mental health difficulties. IAPT services are encouraged to prioritise perinatal women so that they can be offered timely help with their depression. However, in some studies, both women and therapists have highlighted issues relating to barriers to access and a need to tailor therapy to the perinatal context [21]. Finding that CBT provided by low-intensity practitioners is not always relevant to perinatal depression led O'Mahen [9] and others to produce modified, tailored training packages to address the perinatal-specific concerns relating to self, motherhood, and interpersonal domains of CBT. These concerns, which notably centred around women's interpersonal skills and problems, rippled out to affect their negative thoughts and behaviours as well as their resilience and efficacy behaviours. IPC, which targets interpersonal domains, may be ideally suited to the concerns which depressed pregnant women express. Previous evaluations of IPC have focussed on efficacy rather than qualitative views of acceptability. However, one qualitative study has shown that it is likely to be an effective and acceptable treatment for young people with primarily depressive symptoms seen in local authority non-specialist mental health services [22]. Participants described specific advantages of IPC over standard counselling, including practical help, the use of goals, psychoeducation and integrating a self-rated questionnaire into treatment [22]. In our qualitative study, women also highlighted the benefits of identifying specific depression triggers, using exercises such as a timeline of depression, relationship mapping and circles of support.
Another small US trial using brief-IPT compared to treatment as usual for perinatal depression showed that it was acceptable to low-income women and helpful for improving depressive symptoms and social support. However, there was relatively low session attendance in that trial, which limited the interpretation of the study results [15]. In our study, over 70% of women completed their IPC course to the satisfaction of the PWP delivering the therapy. A strength of our qualitative study is that it included almost 60% of the pregnant women in the trial who received talking therapy, through interviews and detailed phone conversations, as well as 40% of those who did not receive therapy. The junior mental health practitioners (PWPs) also provided valuable insights into delivering IPC compared to their usual low-intensity CBT sessions.
Flow-induced vibration of curved pipe conveying fluid by a new transfer matrix method ABSTRACT A new transfer matrix method based on the Laplace transform is proposed to analyze the flow-induced vibration of curved pipe conveying fluid. After comparison with the existing literature, the proposed method is verified to be of high accuracy in calculating the critical flowing velocity. Three examples including cantilevered, clamped-elastically supported, and periodic cantilevered curved pipes are investigated by the proposed method; natural frequency as a function of the flowing velocity for the former two cases and critical velocity for all of them are calculated, not only to obtain some findings not mentioned in other literature, but also to show the method's broad applicability. For the first time, it is pointed out that the steady combined force should be considered within an appropriate interval during the calculation of the clamped-elastically supported curved pipe. The method can also be extended to study other forms of vibration problems concerning fluid-conveying pipes, or other problems characterized by a chain structure. Introduction Fluid-conveying pipes play a significant role in modern industry, such as in oil and natural gas transportation systems, the hot leg piping system of a nuclear reactor, the liquid fuel propelling system of a rocket, and the oil feeding systems of vehicles; consequently, the associated fluid-structure interaction vibration problem has been attracting increasing attention, especially in recent decades (Enz & Thomsen, 2011; Gao, Zhang, Liu, Sun, & Tian, 2018; Guo, Zhang, & Paidoussis, 2010; Isenmann et al., 2016; Karami & Farid, 2015; Wang, Liu, Ni, & Wu, 2013), from both researchers and producers; as pointed out by Paidoussis (2008), the dynamics of pipes conveying fluid has become a model dynamical problem. Due to its broad applications, attaining accurate dynamic properties is of importance for preventing undesirable responses and improving the system's safety. Generally speaking, there are two focal aspects of the fluid-structure interaction vibration problem of fluid-conveying pipes: one is the development of the mathematical model, the other is the exploration of the calculation method. With the rapid development of computer technology in recent years, many methods have sprung up to solve such fluid-structure interaction problems. For example, Misra, Paidoussis, and Van (1988a, 1988b) investigated the dynamics of curved pipes by the finite element method and, at the same time, proposed three theories with respect to the centerline, i.e. the inextensible theory, the modified inextensible theory, and the extensible theory. Wang, Ni, and Huang (2007) and Wang and Ni (2008) separately resorted to the differential quadrature method and its generalized form to solve the dynamic problems of a cantilevered curved pipe with motion constraints and a curved pipe with both ends supported. Ni, Zhang, and Wang (2011) used the differential transformation method to study natural frequency as a function of flowing velocity of a straight pipe under four typical supporting types, i.e. cantilevered, pinned-pinned, clamped-pinned, and clamped-clamped, respectively. Green's function method was adopted by Li and Yang (2014) to solve the forced vibration of fluid-conveying straight pipes with various boundary conditions.
Zhao and Sun (2017) then extended the same method to investigate the forced vibration problem of a curved pipe conveying fluid with both ends supported, placing emphasis on the influences of some key parameters on the displacement response. By definition, the transfer matrix method (TMM) develops results from one element to the whole system, and is hence suitable for researching dynamic problems characterized by a chain structure. The study of fluid-conveying-pipe dynamic problems by TMM arose decades ago; some related achievements include the following. Koo and Yoo (2000) adopted the dynamic stiffness method of the wave approach to construct a transfer matrix and thereafter researched the dynamic characteristics of the KALIMER IHTS hot leg piping system. Based on the initial parameter method, Huang, Zeng, and Wei (2002) established a new matrix method to calculate the critical flowing velocity of curved pipe conveying fluid. Yu, Paidoussis, Shen, and Wang (2014) then adopted the same approach to study the dynamic stability of a periodic straight pipe conveying fluid. Dai, Wang, Qian, and Gan (2012) analyzed the vibration of three-dimensional pipes conveying fluid by TMM based on the wave approach and announced the necessity of considering the steady combined force. Li, Liu, and Kong (2014) investigated the fluid-structure interaction behavior of pipelines considering the effects of pipe wall thickness, fluid pressure, and velocity by TMM, expanding its application to the optimization of supports and structural properties. The core of TMM is the state vector, and all the different variants aim to find its simplest form. In this paper, a new transfer matrix method based on the Laplace transform is introduced to establish the analytical form of the state vector and then analyze the flow-induced linear vibration of curved pipe conveying fluid. Three examples including cantilevered, clamped-elastically supported, and periodic cantilevered curved pipes are investigated in depth, not only to obtain some findings not mentioned in other literature, but also to demonstrate that the method can be extended to study other forms of vibration problems concerning fluid-conveying pipes, or other systems characterized by a chain structure, which sufficiently reveals the advantage of the proposed method. Governing equation A general cantilevered curved pipe conveying fluid is plotted in Figure 1, where w and r separately denote the tangential and radial displacement of an arbitrary point on the centerline, R is the constant radius of the centerline, θ is the angle coordinate, and θc is the opening angle of the pipe. As reported by Misra et al. (1988a, 1988b), the conclusions regarding stability based on the modified inextensible and extensible theories for curved pipes with both ends supported are more reliable, mainly due to the contribution of the steady combined force (also named 'initial axial force'), while for the cantilevered pipe the effect of this force is less pronounced and hence can be neglected (Wang et al., 2007; Misra et al., 1988a).
Therefore, the general linear governing equation (Wang & Ni, 2008) for the centerline of the curved pipe conveying fluid is Equation (1), in which m_f and m_p denote the mass per unit length of fluid and pipe, respectively; the inner fluid flows with constant velocity U, modeled as plug flow; EI represents the flexural stiffness; t is the time; m_t and c_t are the added mass per unit length and the coefficient of viscous damping due to the surrounding fluid, associated with the transverse motion, with m_a and c_a playing similar roles in the longitudinal motion; and Π0 denotes the steady combined force, which equals zero for the cantilevered curved pipe and −u^2 for those with both ends supported. Application of the new transfer matrix method to the present problem The solution of Equation (1) can be written in the separated form of Equation (2), where ω is the dimensionless natural frequency corresponding to its dimensional counterpart. If a cantilevered pipe is considered, substituting Equation (2) into (1) and setting Π0 = 0 yields the sixth-order ordinary differential equation (3) for y(θ). By means of the Laplace transform, and after elementary transformation, the result is Equation (4), where s is the transformed variable in the complex domain and y^(i−1)(0) represents the (i−1)th derivative of y at θ = 0. The denominator in Equation (4) can be factorized into the form of Equation (5). By means of the inverse Laplace transform (Li, Zhao, & Li, 2014), the result is Equation (6), where λ_i,r represents the result of substituting s with the root s_r in λ_i. The nth (n = 0, 1, 2, 3, 4, 5) derivative of y(θ) is then given by Equation (7), which can be reformatted compactly as Equation (8). According to Equation (2), the tangential displacement at an arbitrary angle θ can be expressed as Equation (9), and if the centerline is assumed to be inextensible, the radial displacement, angle of rotation, bending moment, transverse shear force, and axial force can be formulated as Equations (10)-(14). The state vector at θ is then given by Equation (15), where q = {w, r, ψ, M, Q, N}^T; the only non-zero term of the load vector Γ is its sixth entry, Γ_6 = u^2 EI/R^3, and the non-zero terms of H follow likewise, so that Equation (16) holds. It is obvious that Equations (1)-(16) apply to the whole pipe; for pipes possessing inhomogeneous factors (including material properties, radius of the centerline, geometrical shape, etc.), the above relationships remain true within each part if the pipe is divided into sufficiently short elements. Suppose that for the kth element the two boundaries are numbered k and k + 1, with angle coordinates θ_k and θ_k+1, respectively; then, according to Equations (8), (16), and (15), Equations (17), (18), and (19) can be obtained. It is noteworthy that in Equations (17), (18), and (19) the subscripts of θ and q denote node numbers, while those of the other variables denote element numbers. Substituting Equation (18) into (17) and then introducing the result into (19) gives, for the kth element, Equation (20), where S_k = H_k T_k H_k^{-1}, P_k = I − S_k, and l and r denote the left and right directions, respectively. If there exist elastic supports at the kth node, then Equation (21) applies, where K and K_t separately denote the elastic coefficients in the translational and rotational directions. According to Equation (21), for the kth node, Equation (22) can be obtained, in which the subscript of F denotes the node number, F_ii = 1 (i = 1, 2, 3, 4, 5, 6), F_43 = K_t, F_52 = −K, and all other entries equal zero. Obviously, F reduces to the 6 × 6 identity matrix if there are no elastic supports.
Substituting Equation (22) into (20) yields Equation (23). Combining Equations (22) and (23), the final expression for the nth node is

q_n^l = F_n q_n^r = F_n (S_{n-1} F_{n-1} ··· S_1 F_1 q_1^r + S_{n-1} F_{n-1} ··· S_2 F_2 P_1 Γ_1 + S_{n-1} F_{n-1} ··· S_3 F_3 P_2 Γ_2 + ··· + S_{n-1} F_{n-1} P_{n-2} Γ_{n-2} + P_{n-1} Γ_{n-1})   (24)

Compactly, Equation (24) can be written as Equation (25). For the cantilevered pipe, the state vectors at its two ends are q_1^r = {0, 0, 0, M_1, Q_1, N_1}^T and q_n^l = {w_n, r_n, ψ_n, 0, 0, 0}^T. Combining Equation (25) with these boundary conditions leads to Equation (28); setting the determinant of its coefficient matrix equal to zero yields the dimensionless natural frequency. If Re(ω) = 0 and Im(ω) changes its sign at some critical velocity, the pipe loses stability by divergence (u_cd is used to denote this critical value hereafter), while if Re(ω) ≠ 0 and Im(ω) changes its sign, the pipe loses stability by flutter (u_cf is used to denote this critical value hereafter).
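To make the assembly concrete, a minimal numerical sketch of Equations (22), (24), and (28) follows. It assumes the per-element matrices S_k and inhomogeneous contributions G_k = P_k Γ_k have already been evaluated for a trial pair (u, ω) from Equations (8), (15), (16), and (20); all function names are illustrative rather than taken from the paper.

```python
import numpy as np

def node_matrix(K=0.0, Kt=0.0):
    # Node matrix F of Eq. (22): identity, except that the 1-based entries
    # F_43 = Kt and F_52 = -K (0-based F[3, 2] and F[4, 1]) couple rotation
    # into moment and radial displacement into shear, for the state ordering
    # q = {w, r, psi, M, Q, N}^T.
    F = np.eye(6, dtype=complex)
    F[3, 2] = Kt
    F[4, 1] = -K
    return F

def propagate(S_list, G_list, F_list, q1_right):
    # Eq. (24): q_n^l = F_n ( S_{n-1} F_{n-1} ... S_1 F_1 q_1^r + load terms ).
    # F_list holds the n node matrices, S_list and G_list the n-1 element ones.
    q = F_list[0] @ q1_right
    for S, G, F in zip(S_list, G_list, F_list[1:]):
        q = F @ (S @ q + G)
    return q

def cantilever_det(S_list, F_list):
    # Frequency equation for the cantilevered pipe: the clamped end carries
    # q_1^r = {0, 0, 0, M, Q, N}^T, and the free end requires M = Q = N = 0,
    # so only the 3x3 block linking the unknown end forces (columns 3..5)
    # to the force/moment rows (rows 3..5) of the homogeneous chain matters.
    T = F_list[0].astype(complex)
    for S, F in zip(S_list, F_list[1:]):
        T = F @ (S @ T)
    return np.linalg.det(T[3:6, 3:6])
```

Tracing a root of cantilever_det over complex ω while u is increased reproduces curves of the kind shown in Figure 2; divergence and flutter are then read off from whether Re(ω) is zero or non-zero when Im(ω) changes sign.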
Verification of the proposed method By reference to Figure 1, if two spring supports are both implemented on the left end and both elastic coefficients are large enough (e.g. K = K_t = 10^8), this end can be seen as completely constrained and the supporting type becomes clamped-clamped, while if K_t = 0 and K is large enough, the problem transforms into a clamped-pinned pipe. If the added-mass ratios β_a and β_t and the corresponding damping terms are all set to zero, β = 0.5, K = K_t = 10^8, and the steady combined force is neglected, the critical velocities obtained separately by the proposed method and by Huang et al. (2002) are shown in Table 1. As Table 1 shows, the results obtained by these two methods agree well with each other, which reveals the validity of the proposed method in calculating the critical velocity. Examples Three examples are introduced in this subsection, i.e. cantilevered, elastically supported, and periodic cantilevered curved pipes conveying fluid, respectively. The focus is not only to study the dynamics of each model but also to show the broad applicability of the proposed method. Cantilevered curved pipe conveying fluid A pure cantilevered curved pipe is plotted in Figure 1, and the physical parameters of this piping system are listed in Table 2. According to Table 2, the flexural stiffness EI and the mass ratio β can be calculated; they are EI = 182.252 N m^2 and β = 0.262, respectively. 'Pure' here means there is neither inhomogeneity nor extra supports; the pipe is a homogeneous cantilevered pipe, and hence there is no need to divide it. With the added-mass and damping terms set to zero, Figure 2 shows, with the aid of the proposed method, the first four natural frequencies as functions of the flowing velocity in the dimensionless domain. According to Figure 2, all modes decline at first; in addition, the 2nd and 4th modes lose stability via flutter at u_cf = 3.140 and 5.714, respectively, while for the other two modes there are no instabilities over the computed range. As mentioned in Section 3.1, changing the elastic coefficients of the springs at the left end leads to distinct supporting types; it is instructive to examine what happens if the steady combined force is neglected when clamped-clamped and clamped-pinned curved pipes are studied, and Figure 3 shows the results. All calculated modes decline sharply if Π0 = 0, and there exist chances for the pipe to lose stability; e.g. the clamped-clamped curved pipe diverges at u_cd = 2.360 in its 1st mode, which is certainly not a reliable conclusion compared with Misra's analysis (1988b). For the clamped-pinned pipe, all four modes at first decrease as the flowing velocity increases, as Figure 3(b) shows, and converge to a single value after u = 5.5, which is also an unreliable conclusion. Elastically supported curved pipe conveying fluid To improve the system's stiffness, elastic supports are often introduced to support the pipe in practice, and hence this model is studied in this subsection. Obviously, whether or not the springs are implemented on the free end will lead to dramatically different results; Figure 4 shows the mechanical models corresponding to these two cases. Figure 5 shows that changing the elastic coefficients leads to different instabilities; e.g. when K = K_t = 10, the 2nd and 4th modes lose stability by flutter at u_cf = 3.155 and 5.720, respectively, while in Figure 5(b) (i.e. K = K_t = 10^2) only the 2nd mode flutters, at u_cf = 3.230. When K and K_t increase to 10^3, the 1st mode diverges at u_cd = 2.673 with divergence interval [2.673, 3.042], and the 2nd mode flutters at u_cf = 3.268. However, as Figure 5(d) shows, if K and K_t increase further to 10^4, all four calculated modes become unstable, via different types: the 1st and 3rd modes start to diverge at u_cd = 2.357 and 4.534, with divergence intervals [2.357, 3.384] and [4.534, 5.354], while the 2nd and 4th modes flutter at u_cf = 3.300 and 5.379, respectively. It is noteworthy that in the above calculation the steady combined force is neglected, i.e. Π0 = 0, whereas in fact the rotational and translational freedoms at the free end are constrained to some extent by the elastic supports, only not completely; the inextensible theory of the centerline is therefore not strictly appropriate here, and omitting this force will lead to results departing from the true values, the deviation growing with the elastic coefficients. Strictly speaking, the supports at both ends mentioned in the modified inextensible and extensible theories are supposed to be rigid (clamped or pinned); for flexible supports (i.e. the elastic supports in this subsection), Π0 is no longer a fixed value but a variable with respect to the elastic coefficients. It should therefore be limited to an interval, i.e. −u^2 < Π0 < 0; as for how to evaluate the specific value, to the best of the authors' knowledge there is no relevant literature worldwide to date. To manifest the effect of Π0 on the results, with the aid of the present method the real parts of the first four modes as functions of the flowing velocity and Π0 are calculated and plotted in Figure 6, where K = K_t = 10^3 and the other parameters are the same as before. As Figure 6 shows, things are extremely different when Π0 takes different values: for Π0 = −u^2 there is no instability over the computed range, while for Π0 = −0.01u^2 the 1st mode diverges within [2.689, 3.049] and the 2nd mode flutters at u_cf = 3.289. When Π0 = −0.1u^2, the 1st mode diverges within [2.820, 3.205] and the 2nd mode flutters at u_cf = 3.503. If Π0 = −0.5u^2, the 1st mode diverges within [3.781, 4.303] and the 2nd mode flutters at u_cf = 5.222. These results verify that Π0 indeed has quite significant influences on the stability of the pipe. If the springs are implemented at an intermediate position, i.e. the mechanical model shown in Figure 4(b), things will be different.
If inhomogeneity is also neglected, K = K_t, Π0 = 0, and the other parameters are the same as in Table 1, then only two elements are needed in total here. Figure 7 shows u_cf of the 2nd mode as a function of the non-dimensional implementation position θ_m and the elastic coefficients K and K_t. As shown in Figure 7, all curves fluctuate as θ_m increases, whatever the elastic coefficients are, and they all start from θ_m = 0, u_cf = 3.140, mainly because θ_m = 0 means there are no elastic supports, which leads to the same result as for the cantilevered pipe. Furthermore, at any given position, the larger the elastic coefficients, the larger the obtained u_cf; in addition, there are two local peaks and one valley in each curve, with all curves reaching their valley values around θ_m = 0.4 and their peak values around θ_m = 0.1 and 0.8. In terms of system safety, a comparatively larger critical velocity is the better choice, ensuring that the pipe can work regularly over a relatively wide velocity range; in this sense, the present method will be of great help in designing elastic supports (including the elastic coefficients and their implementation positions). This is an optimization problem of significant practical importance in engineering and deserves further study. Periodic cantilevered curved pipe conveying fluid Due to manufacturing errors in production practice, inhomogeneity of material properties inevitably exists in curved pipes. Hypothetically, the pipe is divided into finite elements; as long as each element is short enough, the inhomogeneity within an individual element can be approximately neglected. To make things simple, every other element is assumed to have the same parameters (including Young's modulus, moment of inertia, density, length, etc.); Figure 8 shows a periodic pipe under this assumption. If N denotes the total number of elements, with N_1 and N_2 denoting the numbers of elements of length n_1 and n_2, respectively, then N = N_1 + N_2 naturally holds; in addition, if N is an even number, N_1 = N/2, otherwise N_1 = (N + 1)/2. The length ratio λ = n_1/n_2 is introduced, and obviously λ ∈ (0, +∞).
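As a small illustration of this bookkeeping, the sketch below splits the opening angle θ_c into N alternating elements for a given length ratio λ; the function name, and the convention that the first element has length n_1, are assumptions made for illustration only.

```python
def periodic_elements(N, lam, theta_c):
    # Alternating element lengths n1, n2 with lam = n1/n2, starting with n1.
    # N1 = N/2 for even N and (N + 1)/2 for odd N, as in the text, and the
    # lengths satisfy N1*n1 + N2*n2 = theta_c.
    N1 = N // 2 if N % 2 == 0 else (N + 1) // 2
    N2 = N - N1
    n2 = theta_c / (N1 * lam + N2)
    return [lam * n2 if k % 2 == 0 else n2 for k in range(N)]

# e.g. periodic_elements(5, 2.0, 3.14159...) gives the pattern [n1, n2, n1, n2, n1].
```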
When λ approaches either of its boundaries, the pipe reduces to a plain cantilevered one; under these two circumstances the critical velocity should take the same value, because it is a dimensionless variable, which is supposed to have nothing to do with the dimensional physical parameters or with N. If the flexural rigidity of the main element (i.e. the element of length n_1) is EI_1 = 150 N m^2, that of the element causing the inhomogeneity (i.e. the element of length n_2) is EI_2 = 200 N m^2, and the other parameters are R = 1 m, β = 0.25, θ_c = π, then u_cf of the 2nd mode versus N and λ can be calculated by the proposed method, as Figure 9 shows. According to Figure 9, all curves originate from the same point, i.e. λ = 0, u_cf = 3.082; in addition, with the increase of λ, all curves tend to flatten, and the limiting values converge to 3.082, showing that the calculated results agree well with the analysis mentioned before, which further verifies the validity of the proposed method. For a specific N, there exist critical length ratios (e.g. λ_c = 0.6 for N = 3 and λ_c = 2 for N = 5) at which the calculated u_cf equals 3.082, which means that an appropriate combination of N and λ can be found to approximate a pure cantilevered curved pipe. Meanwhile, when N = 5, the corresponding curve fluctuates over the smallest range among all calculated curves for arbitrary λ, meaning that the result is then nearest to that of a pure cantilevered curved pipe and revealing that there is hardly any need to divide the pipe into more segments. In conclusion, there are two approaches to obtaining roughly the same critical velocity as a cantilevered curved pipe: one is keeping the inhomogeneity as small as possible; the other is setting N to an appropriate value, which can be calculated by the proposed method. Discussion As mentioned in Section 3.2.2, for a pipe conveying fluid with one end clamped and the other elastically supported, the initial axial force Π0 should be considered appropriately, mainly because the translational and rotational freedoms are only partly constrained; however, there is difficulty in evaluating this force, the only known constraint being −u^2 < Π0 < 0. A large quantity of experiments is therefore needed in this sense, and the present method will then be of tremendous help in the subsequent calculations. In the construction of the proposed method, one point of crucial importance must be noted: the problem researched should be linear, so that the time dependence can be separated out. For a nonlinear problem (Lee & Chung, 2002), the proposed method can do nothing, mainly because the Laplace transform is an effective tool only for solving linear problems; it is therefore used in this paper to calculate the position-dependent solution, on the basis of which the transfer matrix is then easy to construct. Accordingly, the method will be helpful in solving linear dynamic problems in other fields, and there remains sufficient space for researchers to widen its application range in further study. Conclusions The flow-induced vibration problem of curved pipe conveying fluid is investigated in this paper. With the aid of the Laplace transform, the analytical state vector is formulated, and then its transfer matrix is derived. Three examples including cantilevered, elastically supported, and periodic cantilevered curved pipes conveying fluid are investigated, and the critical velocities for flutter under these circumstances are calculated by the proposed method. For the fixed-free curved pipe with elastic supports implemented on the free end, if the elastic coefficients become large enough, flutter is no longer the only instability type; divergence may also appear. For the periodic pipe, setting N to an appropriate value is a better choice for approximating a pure cantilevered curved pipe in terms of critical velocity in engineering practice. The proposed method takes advantage of the Laplace transform in solving linear problems, and, combining the three examples, it can be concluded that the proposed method is feasible for studying more complex problems, e.g. a periodic curved pipe with intermediate elastic supports or a changeable centerline radius, combinations of these, or all of the above for straight pipes conveying fluid. In conclusion, the proposed method is suitable for solving linear problems characterized by a chain structure. Disclosure statement No potential conflict of interest was reported by the authors. Funding This work was supported by the National Natural Science Foundation of China [grant number 51775097].
Inter-observer and inter-modality concordance of non-contrast MR angiography and CT angiography for preoperative assessment of potential renal donors Background Magnetic resonance angiography (MRA) is rapidly being employed as an effective substitute for CTA, particularly in situations of poor kidney function. We aimed to examine the inter-observer and inter-modality reliability of non-contrast MR angiography (NC-MRA) and CTA as a non-invasive tool for assessing the anatomical findings of potential living kidney donors. Results All potential donors were referred from a specialized kidney transplantation center and underwent NC-MRA of the renal arteries using a respiratory-triggered magnetization-prepared 3D balanced steady-state free precession (b-SSFP) sequence with inversion recovery pulses and fat saturation (Inhance 3D Inflow Inversion Recovery (IFIR)). Two experienced radiologists reviewed the NC-MRA images and were asked to evaluate the anatomy of both renal arteries and their branching pattern, the presence of accessory or aberrant renal arteries, and to identify any anatomical variant. Lin's correlation test was performed to test the MRA readings of each of the two observers against the CTA findings, which were considered the gold standard for assessment of the renal arteries. Additionally, the observers were asked to assess the image quality. The study included 60 potential kidney donors (43 males and 17 females) with mean age ± SD of 31.3 ± 5.6 years. Excellent to very good inter-observer agreement was found between both observers in the assessment of renal arteries by NC-MRA. There was perfect concordance between MRA and CTA findings in detecting early arterial division and the caliber and length of the left extra-parenchymal segmental branches. Moderate concordance was found in the assessment of the supplied segments of the extra-parenchymal segmental renal arterial branches, and substantial concordance between both MRA observers' findings in the remaining variables of the study. There was excellent agreement between both observers in the assessment of image quality parameters. Conclusions NC-MRA of the renal arteries is an effective alternative to CTA without the risks of radiation or contrast media. planning, to evaluate the renal anatomy and anomalies [1,2]. The gold standard imaging modality for the renal arteries is digital subtraction angiography, as it has the advantage of being diagnostic and sometimes therapeutic in cases of stenosis [3]. However, the main drawbacks of this technique are that it is an invasive method using ionizing radiation and iodinated contrast agents, which are potentially nephrotoxic [2]. The use of multidetector computed tomography (CT), with its higher temporal and spatial resolution, has allowed the acquisition of high-quality images, producing results comparable to those of digital subtraction angiography in the assessment of the renal vasculature and its variants. However, CT angiography (CTA) also uses iodinated contrast agents and ionizing radiation [4,5].
Magnetic resonance angiography (MRA) has been increasingly used as a good alternative to CTA, especially in cases of insufficient kidney function; recent advances in software settings and improved sequence performance have allowed high-quality non-invasive study of the renal vasculature without exposing patients to iodinated contrast agents or ionizing radiation [6][7][8]. Numerous reasons have been reported why non-contrast MRA might be a possible alternative to contrast-enhanced MRA and CTA. The first reason is to avoid possible nephrotoxicity or nephrogenic systemic fibrosis (NSF) secondary to iodinated or gadolinium-based contrast agents, especially in patients with Stage 4 or 5 chronic kidney disease (CKD). Furthermore, there are many concerns about gadolinium deposition in the basal ganglia after repeated administration of gadolinium chelates [10]. Lastly, contraindication to the use of contrast agents (such as allergy) is of concern. Because of all these concerns, newer non-contrast renal MR angiography techniques have become an attractive solution to replace CTA and CE-MRA in the assessment of renal vascular anatomy and variants, and they show promising results [11][12][13]. Therefore, this study aims to assess the inter-observer and inter-modality reliability of NC-MR angiography as a non-invasive method for evaluation of the anatomical findings of potential living kidney donors in comparison with CTA findings. Study population This IRB-approved study included 60 potential kidney donors. All potential donors underwent MR angiography of the renal arteries without the use of contrast agents or any chemical materials. The results were compared to the CTA results obtained as a routine pre-operative investigation. All candidates were informed about the examination time, the importance of remaining motionless during the examination, and the knocking sound of the MRI machine. CTA was performed before MRA, and the interval between the two studies ranged from 0 to 2 days. CTA protocol All subjects were assessed using a 128-slice MDCT scanner (Revolution EVO, GE Healthcare, 128 detectors, Milwaukee, WI, USA). The scan comprised arterial, venous, and delayed (excretory) phases. After an initial scout topogram was obtained, non-ionic iodinated contrast agent (Omnipaque, 350 mgI/ml) was injected through a 16-18-gauge cannula at a flow rate of 5 ml/s. The arterial phase was initiated based on automatic bolus tracking (Smart Prep, GE Healthcare); scanning started 5 seconds after reaching a threshold of 150 HU in the area of the abdominal aorta. The scanned area extended from the diaphragm to the symphysis pubis. The main acquisition parameters for the arterial phase were: section thickness of 1.25 mm, intersection spacing of 1.25 mm, tube voltage of 120 kV, tube current range 250-500 mAs, and 0.5-s gantry rotation time.
MRA protocol Potential donors fasted for 2-4 h prior to the study in order to reduce fluid secretions within bowel loops and peristalsis. The subjects were positioned on the moveable examination table (feet first). Straps and bolsters could be used to help them stay still and maintain the correct position during imaging. MRI examinations were performed on a 1.5-Tesla closed MRI unit (Signa Explorer, GE Medical Systems, Milwaukee, USA). A sixteen-channel circular, polarized, phased-array body coil was positioned anteriorly and posteriorly over the abdomen, and respiratory-triggering bellows were applied. Subjects were instructed to breathe regularly at normal amplitude during data acquisition. The examination included (1) a multi-planar T2-weighted fast field echo (FFE) localizer to locate the region of interest, extending from the diaphragm to the iliac bones, with slice thickness 9 mm; and (2) NC-MRA, performed using respiratory-triggered magnetization-prepared 3D balanced steady-state free precession (3D b-SSFP) with inversion recovery pulses and fat saturation (Inhance 3D Inflow Inversion Recovery (IFIR); GE Healthcare). The scanning parameters were TE = 2.7 ms; TR = 5.4 ms; FOV = 110 mm; slice thickness = 0.2 mm; spacing = 0; flip angle = 90°; matrix = 256 × 256. The average scan time was 3.06 min. Image processing The imaging data obtained after scanning were reviewed on a workstation with 2D and 3D capability and multiple editing options (Advantage Workstation 4.7, GE Healthcare). Image reconstruction and post-processing of the NC-MRA source images were performed by two radiologists using maximum intensity projection (MIP) and volume rendering (VR) techniques to produce a coronal image of the entire renal arterial vasculature. The MIP and VR images were magnified and projected at the appropriate viewing angle due to the small caliber of the renal arteries and their segmental branches. Image analysis and interpretation Two independent radiologists with 13 and 8 years of experience evaluated randomly distributed non-contrast MRA images and compared the results with the CTA results. Both observers were asked to assess the following: (i) renal artery anatomy, branching pattern and early arterial division; (ii) presence of supernumerary arteries (accessory or aberrant renal arteries); (iii) extra-parenchymal segmental branches; and (iv) identification of different vascular anatomical variants. Accessory arteries were defined as vessels that enter the kidney together with the main renal artery through the hilum, whereas aberrant arteries enter the kidney directly through the capsule outside the hilum. The observers were asked to measure the caliber and length of the main renal arteries, supernumerary arteries, and extra-parenchymal segmental branches. The caliber of the renal arteries was measured from source images in cross-sectional planes within a fixed distance of 10 mm from the aorta, except for one case with a very short main renal artery. The length of the renal arteries was measured from coronal reconstructed 3D images with a manual 3D cursor in the workstation measurement tools, following the tortuosity of the renal arteries. Additionally, both observers were asked to grade the image quality based on sharpness, presence of artifacts, and diagnostic acceptability following the grading in Table 1.
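As an aside on the MIP post-processing step above: a maximum intensity projection is conceptually simple, and the following minimal sketch shows the core operation, assuming the NC-MRA source images form a 3D NumPy array with the anterior-posterior direction on axis 1 (an assumption about orientation; the commercial workstation naturally does far more, including VR and interactive editing).

```python
import numpy as np

def coronal_mip(volume):
    # volume: 3D array of MRA signal intensities. Collapsing the
    # anterior-posterior axis keeps, for every coronal ray, the brightest
    # voxel, i.e. the high-signal flowing blood in the b-SSFP acquisition.
    return volume.max(axis=1)
```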
Statistical analysis Data were analyzed using IBM-SPSS (IBM Corp. Released 2017. IBM SPSS Statistics for Windows, Version 25.0. Armonk, NY: IBM Corp.) and MedCalc Statistical Software version 18.9.1 (MedCalc Software bvba, Ostend, Belgium; http://www.medcalc.org; 2018). The diagnostic accuracy of NC-MRA for determining renal artery anatomy and variants was correlated with the gold standard CT angiography to calculate the sensitivity and specificity of NC-MRA as a single preoperative method for assessment of the renal vascular anatomy of living kidney donors and mapping for operation. Quantitative data were expressed as mean ± standard deviation (SD). Non-quantitative data were expressed as frequency [N] and percentage [%]. Inter-observer agreement and inter-modality concordance for nominal data were assessed by Cohen's kappa (poor < 0.20; fair = 0.21-0.40; moderate = 0.41-0.60; good = 0.61-0.80; very good = 0.81-0.99; perfect = 1.00). Inter-observer agreement and inter-modality concordance for ordinal data were assessed by weighted kappa, and for scale data using the intraclass correlation and Lin's concordance coefficient (poor < 0.90; moderate = 0.90-0.95; substantial = 0.95-0.99; perfect > 0.99).
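Both agreement statistics above are straightforward to compute from paired readings; the sketch below follows the standard definitions and is only an illustration (it is not the SPSS or MedCalc implementation used in the study, and the function names are ours).

```python
import numpy as np

def cohens_kappa(a, b):
    # Cohen's kappa for two raters' nominal codes of equal length.
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                         # observed agreement
    cats = np.union1d(a, b)
    pe = sum(np.mean(a == c) * np.mean(b == c)   # agreement expected by chance
             for c in cats)
    return (po - pe) / (1.0 - pe)

def lins_ccc(x, y):
    # Lin's concordance correlation coefficient for paired scale measurements,
    # e.g. one observer's MRA calibers against the CTA calibers.
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

The cut-offs quoted in the text (e.g. substantial = 0.95-0.99) are then simply applied to the returned coefficients.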
There was poor concordance between the two observers in the assessment of extra-parenchymal segmental renal branch number, side, and right-branch caliber and length, which did not achieve statistical significance because of the very small sample size. According to both MRA observers, there were two extra-parenchymal segmental renal arterial branches on the right side, while CTA readings revealed three branches. There was substantial concordance between both MRA observers' findings for the remaining quantitative variables of the study. There was almost perfect agreement between the MRA observers and CTA readings in measuring the accessory renal arteries and the left extra-parenchymal segmental branch length. There was good agreement between the two MRA observers and CTA readings in assessing the caliber and length of the right extra-parenchymal segmental branches. All other remaining quantitative variables showed very good agreement between the two MRA observers and between each MRA observer and the CTA readings.

Non-quantitative variables assessment and correlations (Table 4)
There was very good agreement between the MRA observers and CTA readings in assessing early arterial division and the number of extra-parenchymal segmental branches. There was good agreement between both MRA observers and CTA findings in the assessment of extra-parenchymal segmental branches (side and supplied segment) and in the evaluation of other anatomical variants. For the other nominal variables, perfect agreement was found between both MRA observers and between each observer and the CTA findings.

Qualitative assessment of MRA images
There was excellent agreement between both observers in the assessment of image quality parameters. There was one case (1.7%) with poorer sharpness than average according to observer 1's scoring, and two cases (3.3%) according to observer 2 (Table 5a). Regarding imaging artifacts, both observers listed only one case (1.7%) with artifacts that affected image quality and interpretation (Table 5b). According to both observers, two cases (3.3%) had suboptimal diagnostic acceptability (Table 5c).

Sensitivity and specificity
Comparing the anatomical findings of non-contrast MR renal angiography in our study with the reference-standard CT renal angiography, the sensitivity and specificity were calculated as follows: early division (93.3% and 100%), extra-parenchymal segmental branch (85.7% and 100%), and other anatomical variants (83.3% and 100%).

Discussion
In donor transplantation cases, angiography is crucial to assess the renal vasculature. Multiple accessory renal arteries or early branching may become a challenge for transplantation and could result in severe complications and even transplant failure [13,14]. CTA is considered the gold standard for the preoperative assessment of renal donors. However, the main drawbacks of CTA are exposure to nephrotoxic iodinated contrast and ionizing radiation [15]. NC-MRA is an attractive solution to avoid radiation exposure and contrast administration [16]. Recent studies have evaluated the NC-MRA technique for assessing renal artery stenosis [12,17-19] and vascular anatomy in potential renal donors and have produced promising results [2,9,20-25]. Nevertheless, the diagnostic accuracy of NC-MRA was not validated against CTA in all these studies: NC-MRA was compared to contrast-enhanced MRA in some studies [2,17,18,20], to operative results [20,23], or to DSA, as reported by Gue et al. [19].
Furthermore, the current study included a relatively large number of donors (n = 60), with 120 kidneys examined. These showed 144 renal arteries: 120 main and 24 supernumerary renal arteries.

The current study investigated the inter-observer and inter-modality concordance of non-contrast MR angiography, using the Inhance Inflow Renal MRA 3D SSFP sequence, against CT angiography for the preoperative assessment of potential renal donors. Our results showed excellent inter-modality concordance between traditional CTA and NC-MRA in detecting the most common anatomical variants, such as supernumerary renal arteries, their number, origins, and supplied renal poles. Readers successfully identified all main renal and accessory arteries in the study population. In concordance with our results, Patil et al. [24] reported very good inter-reader agreement for supernumerary arteries (K = 0.97) and early branching (K = 0.88) on CTA and NC-MRA. Blankholm et al. [9] reported excellent agreement in the ostium and proximal segment caliber measurements by NC-MRA compared to CTA for both readers; these results are similar to the current study's findings regarding renal artery caliber assessment between NC-MRA and CTA and between the two observers. Another study, which compared the diagnostic accuracy of NC-MRA versus CTA for the assessment of renal artery stenosis, found an average 6% variation in the measured percentage of stenosis between the two readers; this variability occurred for both NC-MRA and CTA, consistent with the modest linear correlation seen between CTA and MRA [12].

Both observers in the current study detected all anatomical variants of the renal arteries on NC-MRA apart from the abnormal origin of the left testicular artery from the left kidney; both detected the left testicular artery on NC-MRA after reviewing the CTA. Testicular artery variations are relatively rare, ranging from 0.4 to 14%, and may involve their number, origin, or course. Testicular arteries may originate from the aorta at an abnormal level, or from the renal or suprarenal artery, or any one of the lumbar arteries [26]. This study included other unusual branches arising from the renal artery, such as the phrenic and adrenal arteries. We advise radiologists assigned to read preoperative scans for renal donors to know the detailed anatomy of the renal arteries and their branches to avoid such interpretation errors.

Although readers' confidence was slightly lower for NC-MRA images in the current study, image quality was more than acceptable in most cases. Similarly, Parienty et al. reported that the image quality of NC-MRA using the 3D b-SSFP technique was good in 87% and moderate in 13% of images, using a 3-point scoring system with good, moderate, and poor scores [27]. In another study that assessed vascular visualization quantitatively, the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of both renal arteries on SSFP MRA were higher than those measured by CT angiography, and the differences were statistically significant (p < 0.001) [25].
In a study of 40 subjects, Goetti et al. [20] reported NC-MRA sensitivity, specificity, and accuracy of 100%, 89%, and 91%, respectively, compared to CE-MRA. They reported several technical advantages of NC-MRA over CE-MRA. First, problems with early parenchymal enhancement or venous contamination related to contrast agent bolus timing do not occur with NC-MRA. Second, respiratory triggering with NC-MRA allows subjects to breathe continuously during data acquisition and avoids the motion and breathing artifacts commonly encountered with CE-MRA. Lastly, the higher in-plane resolution of NC-MRA compared to CE-MRA improves the delineation of small-caliber accessory renal arteries.

According to Blankholm et al. [9], CTA and MRI showed a specificity and sensitivity of 100% in detecting the presence of more than one artery, compared with observations at nephrectomy. Another study concluded that unenhanced MRA, in comparison with CTA, showed high sensitivity (72.7-100%), specificity (96.3-100%), and overall accuracy (> 90%) for the identification of multiple arteries, with excellent inter-observer agreement; this could lead to establishing NC-MRA as an alternative to CTA for evaluating kidney donors. These findings are in concordance with the current study's results against the reference-standard CT renal angiography, where sensitivity and specificity were 93.3% and 100% for the detection of early division, 85.7% and 100% for extra-parenchymal segmental branches, and 83.3% and 100% for other anatomical variants.

Generally, the best NC-MRA technique for the evaluation of the renal arteries is IFIR. Eleven studies that used IFIR on 1.5-Tesla scanners in a total of 527 patients reported a median sensitivity of ≈88% and a median specificity of ≈95% compared to CE-MRA, DSA, or CTA as the reference-standard examination (evidence level 1b) [28]. Recently, the spatial labeling with multiple inversion pulses (SLEEK) technique has been introduced for one-step assessment of renal function and vascular anatomy. In a study that included 78 patients with or without chronic kidney disease, the performance of SLEEK in displaying the renal artery was highly consistent with the results of CTA (kappa = 0.713; 95% CI, 0.413-1.000) [29]. One of the critical limitations of NC-MRA is the positioning of the 3D volume slab (i.e., limited craniocaudal volume coverage per slab in a single acquisition), which may result in missing small accessory arteries arising from pelvic vessels [18]. This limitation did not occur in our study, which used 11-cm craniocaudal coverage.

The current study has a few limitations. First, the renal venous anatomy was not assessed by NC-MRA. A recent study used SSFP-MRA to assess the renal artery and phase-contrast MRA to assess the renal vein in potential donors; there was no significant difference in vessel length measured by MRA (p > 0.05), although the diameter of the renal vessels measured by MRA was slightly smaller than that measured by CTA [25]. Secondly, the potential donors in the current study were examined on a 1.5-T scanner. We recommend further studies comparing the diagnostic accuracy of NC-MRA performed on 1.5-T and 3-T scanners.

Fig. 1 Right aberrant renal artery in a 37-year-old male potential renal donor. a, b Coronal MIP and VR-processed NC-MRA images. c, d Coronal MIP and VR-processed CTA images
Fig. 2 Right accessory renal artery (arrowheads) and early division on the left side into an inferior segmental branch (red curved arrows) in a 23-year-old male potential renal donor. a, b Coronal MIP and VR-processed NC-MRA images. c, d Coronal MIP and VR-processed CTA images

Fig. 3 Early arterial division into a right extra-parenchymal apical segmental branch in a 19-year-old male potential renal donor. a, b Coronal VR and MIP-processed NC-MRA images. c, d Coronal VR and MIP-processed CTA images

Fig. 4 Left testicular artery arising from the left inferior segmental renal artery in a 29-year-old male potential renal donor. a, b Coronal MIP and VR-processed CTA images

Table 1 Qualitative grading score of renal MRA images
Table 2 CTA characteristics of the supernumerary and extra-parenchymal segmental branches
Table 3 Inter-observer agreement between two observers for quantitative variables. ICC, intraclass correlation coefficient; CI, confidence interval
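As a concrete check of the sensitivity and specificity figures reported above, the short sketch below reproduces the early-division numbers from plausible underlying counts. The counts (14 of the 15 CTA-positive early divisions detected on NC-MRA, with no false positives among the other 45 donors) are inferred from the reported results and are illustrative only.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard definitions: sens = TP/(TP+FN), spec = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Early arterial division: CTA (reference) positive in 15 of 60 donors;
# NC-MRA missed one positive and produced no false positives.
sens, spec = sensitivity_specificity(tp=14, fn=1, tn=45, fp=0)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 93.3%, 100.0%
```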
Experimental limits on the fundamental Planck scale in large extra dimensions

I present an up-to-date set of limits on the fundamental Planck scale M_D. The best limit for each number of extra dimensions n is shown in bold font. For n = 2, M_D > 5.6 TeV; n = 3, M_D > 4.4 TeV; n = 4, M_D > 3.9 TeV; n = 5, M_D > 3.6 TeV; and n = 6, M_D > 3.3 TeV.

Introduction
Limits on M_D or R have been set by direct gravity measurements, experiments at accelerators, and constraints from astrophysics and cosmology. The astrophysical and cosmological limits are high, particularly for two or three extra dimensions. However, they are based on a number of assumptions, so the results are only order-of-magnitude estimates. Thus, I will not consider astrophysical or cosmological limits further.

Direct gravity measurements
The most straightforward observable effect of large extra dimensions is the modification of Newton's gravitational attraction law at very short distances. Gravity measurements are sensitive to the largest extra dimension. The Eöt-Wash group constrains the size of the largest extra dimension to R ≤ 44 µm at the 95% confidence level [1]. This completely rules out TeV-scale gravity with one large extra dimension. For two large extra dimensions, they obtain M_* ≥ 3.2 TeV. The PDG transforms this into the limit R < 30 µm, which corresponds to M_D > 4.0 TeV in the case of n = 2. The sensitivity to three extra dimensions of equal size is only M_D > 4 × 10^-3 TeV.

Limits from accelerator experiments
The HERA experiments have set limits on the Kaluza-Klein ultraviolet-cutoff scale but not on M_D. I thus consider only the results from the LEP, Tevatron, and LHC collider experiments. In e+e- processes with real graviton emission, the cross section is directly sensitive to the number of extra dimensions and the fundamental scale of gravity. Virtual graviton exchange is sensitive to the ratio λ/M_H, where M_H is an ultraviolet-cutoff scale, which is not equivalent to M_D but should be of the same order of magnitude, and λ is a coupling constant that depends on the underlying theory of gravity. In pp collisions, direct graviton emission also depends on M_D, while virtual graviton exchange does not. The dependence on the ultraviolet cutoff is more complicated, but the ideas are similar.

Tevatron results
I consider only the direct graviton emission searches from Run II of the Tevatron (Table 1). The latest CDF search is in jets plus missing transverse energy final states [6]; it uses a K-factor (the ratio of cross sections calculated at next-to-leading order and leading order) of 1.3. The latest DØ search is in mono-photon plus missing transverse energy final states [7]; the K-factor is included in the uncertainties.

LHC results
I consider only the direct graviton emission searches from the LHC experiments (Table 1).

Black hole searches
ATLAS and CMS have searched for direct black hole production. The limits on M_D from these searches are largely model dependent. In the case of classical black hole models, the limits on M_D depend on the threshold production mass M_th as well as n. CMS has also set such limits for quantum black hole production models using di-jet events [12]. In models of quantum black hole production, ATLAS has taken the threshold mass as M_D and searched in di-jet events [13]. Since the models are speculative, I do not consider them as giving limits on M_D.
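The conversion between R and M_D quoted in the direct-gravity section follows from the ADD relation between the four-dimensional and higher-dimensional Planck scales. A minimal sketch, assuming the convention M̄_Pl² = M_D^(n+2) R^n with the reduced Planck mass M̄_Pl ≈ 2.4 × 10^18 GeV (conventions differ between papers by factors of 2π); with these choices it reproduces the PDG conversion R < 30 µm ↔ M_D > 4.0 TeV for n = 2 quoted above.

```python
HBARC_GEV_M = 1.973269804e-16  # hbar*c in GeV*m (converts GeV^-1 to meters)
MPL_REDUCED_GEV = 2.435e18     # reduced 4D Planck mass in GeV

def radius_from_md(md_gev, n):
    """Compactification radius R in meters for fundamental scale M_D (GeV)
    and n extra dimensions, assuming Mpl_bar^2 = M_D^(n+2) * R^n,
    i.e. 1/R = M_D * (M_D / Mpl_bar)^(2/n)."""
    r_inverse_gev = md_gev * (md_gev / MPL_REDUCED_GEV) ** (2.0 / n)
    return HBARC_GEV_M / r_inverse_gev

# n = 2, M_D = 4.0 TeV should give R of about 30 micrometers (see text)
print(radius_from_md(4000.0, 2))  # ~3.0e-5 m
```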
Characterization and functions of beta defensins in the epididymis

The epididymal beta-defensins have evolved by repeated gene duplication and divergence to encode a family of proteins that provide direct protection against pathogens and also support the male reproductive tract in its primary function. Male tract defensins also facilitate recovery from pathogen attack. The beta-defensins possess ancient conserved sequence and structural features widespread in multicellular organisms, suggesting fundamental roles in species survival. Primate SPAG11, the functional fusion of two ancestrally independent beta-defensin genes, produces a large family of alternatively spliced transcripts that are expressed according to tissue-specific and species-specific constraints. The complexity of SPAG11 varies in different branches of mammalian evolution. Interactions of human SPAG11D with host proteins indicate involvement in multiple signaling pathways.

Introduction
Defensins emerged from our studies on epididymis-specific proteins in which we were seeking novel male contraceptive targets. Among the candidate targets, the epididymal protease inhibitor Eppin was shown to be a successful reversible male immunocontraceptive in macaques [1]. The first defensin discovered in this program was given the clone name ESC42, and its trefoil-like motif was described [2]. Trefoil proteins are important in host defense; they maintain mucosal integrity and influence defensin and adaptive immunity gene expression [3]. After this motif was recognized as the β-defensin signature, ESC42 was named β-defensin 118 (DEFB118). DEFB118 is a member of a large family of genes clustered primarily on human chromosomes 6, 8 and 20 (Figure 1) [4-11]. Defensins have evolved by repeated gene duplication and divergence, including functional diversification [12]. Except for the 6-cysteine domain, rich in positively charged amino acids, defensins differ considerably in their amino acid sequences and target pathogen specificity [4]. A similar cysteine array is found in some lectins [13] and antibacterial protease inhibitors, including the contraceptive target Eppin [14] and secretory leukocyte protease inhibitor [15] (Figure 2). Ancient guards against pathogen invasion, lectins and protease inhibitors are also important in plant host defense [16].

β-defensin primary sequences and functions
Beyond the 6-cysteine signature motif, the simplest β-defensins have little additional sequence (Figure 2) and fall in the molecular weight range of 5-10 kDa. These simple defensins, such as human DEFB1 and DEFB4 (hBD2), are related to defensins in lower animals, including fish [17] and insects [18]. Similar defensins are produced in plants, particularly in the reproductive structures (flowers and seeds) [16]. Male reproductive tract defensins are known only in mammals. These defensins may be as large as 18 kDa (human DEFB129) and often have long N-terminal or C-terminal extensions, generally of unknown function. Reproductive functions are suggested by the sperm surface location of several defensins, including SPAG11 [19,20], DEFB118 [2] and DEFB126 [21,22]. Reproductive functions have been reported for rat SPAG11E (Bin1b) [23] and for DEFB126 [21,22]. Bin1b promotes motility in immature spermatozoa from the caput epididymidis by a mechanism dependent on calcium uptake [23]. The long C-terminal domain of DEFB126, rich in threonine and serine, is highly O-glycosylated.
A major component of the sperm glycocalyx [24], DEFB126 is shed during capacitation [22], a loss prerequisite to spermatozoa binding to the zona pellucida [21]. The highly anionic C-terminus of DEFB118 is not thought to have a role in antibacterial action [25], which typically depends on cationic amino acids. The male reproductive tract DEFB123 has a novel function: protection against endotoxemia through restoration of normal tumor necrosis factor-α levels [26].

Structures of β-defensins and similar proteins
Structurally, β-defensins typically contain an N-terminal alpha-helical domain joined by a disulfide bond to a 2-strand or 3-strand beta sheet stabilized by additional disulfide bridges. The similarity of this fold in the human proteins hBD1 [27] and SPAG11E [28], in bovine SPAG11C [29], and in the human intestinal trefoil protein 3 [30] is shown in Figure 3. The fungal, insect, and plant defensins shown are strikingly similar to a scorpion neurotoxin that shows sequence homology with the male reproductive tract defensins DEFB118 and DEFB126 (identified as GenBank AA335178 and ESP13.2 in [31]). Their cysteine-stabilized configuration might represent evidence of broad application of independently evolved structures to common features of host defense challenges [32], or might be evidence of ancient origins of the β-defensins conserving similar domains throughout the animal and plant kingdoms.

The SPAG11 gene is a fusion of two β-defensin genes
Unique among the β-defensins, human SPAG11 represents the functional fusion of two ancestrally independent β-defensin genes [33] (Figure 4). Alternatively spliced transcripts are initiated at both promoters. Transcripts initiated at the A promoter may end after exon 3 or may continue past the poly(A) addition site, presumably a weak termination signal, and continue through the B promoter and the B exons. Species-specific exons are reported for human, monkey, and bovine SPAG11 [29,33-35]. There are fewer bovine mRNA splice variants (only six) than primate variants [29]. Several of the bovine splice sites are in the 3'-untranslated regions, where they may affect mRNA stability. There are three bovine-specific exons. The rat SPAG11 gene is simpler than that in primates and bull, and retains the original separate function of the A and B components. There is only one splice site, and it is in the A component. No species-specific exons are found in rats [36]. Read-through transcription has not been reported for any other pair of defensin genes.

SPAG11 proteins
Translation of these alternatively spliced RNAs produces a complex protein family. Immunohistochemical staining has revealed the presence of multiple SPAG11 isoforms in the epithelial cells of the epididymis, showing that these mRNAs are actively translated [20,29,36]. Most primate SPAG11 proteins contain the N-terminal common region joined to C-terminal peptides encoded by different combinations of exons (Figure 5). Multiple reading frames are utilized: human SPAG11A exon 6 transcripts are translated in one reading frame, the D isoform in a second reading frame, and the Rhesus macaque J isoform in the third reading frame. Why SPAG11 evolved these special features is not known. Perhaps it is for the same reason that families of alternative splice variants operate where discriminative protein association is crucial in immunity [37,38], neuronal function [39,40], hearing [41], olfactory detection [42] and fertility [43].
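The use of multiple reading frames described above can be made concrete with a short sketch: the same exon-derived nucleotide sequence yields entirely different peptides depending on the frame in which translation starts. The fragment below is an arbitrary illustrative sequence, not the actual SPAG11 exon 6; the sketch assumes Biopython is available and uses its Seq.translate method.

```python
from Bio.Seq import Seq

# Arbitrary illustrative nucleotide fragment (not the real SPAG11 exon 6)
transcript = Seq("ATGGCTGCGAAGCGCTTGTGCTGCAAGAAACCCTGA")

for frame in range(3):
    # Shift the start by 0, 1, or 2 bases, trim to a multiple of 3,
    # then translate: each frame gives a different peptide
    sub = transcript[frame:]
    sub = sub[: len(sub) - len(sub) % 3]
    print(f"frame {frame + 1}: {sub.translate()}")
```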
Families of proteins containing different combinations of peptides can have different but overlapping sets of molecular recognition properties and, therefore, overlapping sets of interacting partners that might be of host and/or pathogen origin. SPAG11 mRNA splicing is regulated by tissue-specific and species-specific mechanisms, which has led to the suggestion that different combinations of isoforms more effectively kill the pathogens encountered in different organs [29]. Alternatively, different combinations of isoforms might be required for specific male reproductive functions.

SPAG11 sequence conservation in different species
Alignment of the amino acid sequences of the defensin-like SPAG11C and SPAG11E isoforms using CLUSTALW [44] reveals exon-specific rates of evolutionary divergence (Figure 6). There is strong sequence conservation, indicated by the black shading, in the defensin regions of SPAG11C and SPAG11E, whereas the N-terminal common region shows broad sequence diversity [29]. This region is sometimes called a propiece. The lysine-arginine cleavage site for a furin-like prohormone convertase has been identified in this propiece in humans [45] and is conserved in all species except horses. All of the SPAG11 sequences found thus far are in mammals.

Figure 6. Alignment of SPAG11C and SPAG11E proteins from different mammalian species. Amino acid sequences translated from human exons 2, 3 and 6 and their orthologs in other species were aligned using CLUSTALW at http://www.ebi.ac.uk/clustalw. Highlighting is based on conservation symbols (* . :) determined by CLUSTALW and indicated at the bottom of each alignment. Black highlighting indicates 100% conservation, dark grey indicates highly similar substitutions, and light grey indicates lower-similarity substitutions. GenBank accession numbers are given in Table 1, where "Not found" indicates sequences not yet found in GenBank by Blast searching.

Functions of SPAG11 isoforms
The N-terminal common region has antibacterial activity, although it lacks a defensin motif [46]. Each of the full-length human, rhesus, and bovine SPAG11 proteins tested, as well as the C-terminal peptides of human SPAG11A, D, and G, shows antibacterial activity against Escherichia coli [28]. In addition, the C-terminal peptide of SPAG11A kills Neisseria, Enterococcus, and Staphylococcus [47]. However, the C-terminal peptides of human and rhesus SPAG11C, and of rhesus SPAG11K and SPAG11L, lack antibacterial activity [46]. SPAG11 isoforms and other defensin-like proteins of the male tract kill E. coli by a membrane-disrupting mechanism; disruption has been measured within minutes of contact with the recombinant SPAG11 proteins using fluorescent probes specific for the outer and inner bacterial membranes [14,25,28,48]. SPAG11 and other proteins also inhibit bacterial macromolecular synthesis [46,48]. Damage to the bacteria can be visualized by scanning electron microscopy [14,25,46,48]. E. coli exposed to different SPAG11 peptides shows a range of responses, including shrinkage, loss of cell contents (especially at the division septa), and knob-like distortions (Figure 7). The rapid mechanism of β-defensin bacterial killing is illustrated in Figure 8. Defensin proteins might initially be randomly distributed around a bacterium, but they rapidly begin to bind the negatively charged bacterial surface.
Membrane disruption assays have shown that within 30 s the outer membrane is damaged, and within a few minutes the inner membrane is also disrupted [25,48]. Defensins interfere with macromolecular synthesis by destroying the outer and inner membrane barriers and/or by entering the cell [25,48]. Scanning electron microscopy shows that 30 min of treatment results in the release of cell contents. Bacteria that are unable to seal these pores are not likely to survive. In the homology model of the SPAG11D defensin domain, conserved residues (light grey) [49] and additional basic residues (dark grey) form a potential protein interaction domain (Figure 9). The possibility that a protein receptor for SPAG11D on sperm might bind this region prompted us to look for interacting partners. In recent studies, using yeast two-hybrid screening, we identified a number of epididymal proteins that interact with the full-length mature human SPAG11D protein in yeast, but not with the amino-terminal common region alone (Radhakrishnan et al., unpublished data). Each of these proteins has a role in male fertility that could potentially be modulated by interaction with SPAG11D. Further studies on the interactions of SPAG11 isoforms with epididymis and sperm surface proteins should lead to a better understanding of the full range of male reproductive functions of these antibacterial proteins.

Table 1. GenBank accession numbers for the SPAG11 sequences aligned in Figure 6. "Not found" indicates sequences not yet found in GenBank by Blast searching.

Conclusion
The β-defensin proteins are involved in innate immunity and male reproductive functions. Evolutionary conservation of the β-defensin fold in the animal and plant kingdoms attests to the broad success of this paradigmatic structure in promoting species survival. Multiple interacting partners of SPAG11D suggest involvement in host signaling pathways.
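The conservation symbols described in the Figure 6 caption can be approximated with a few lines of code: each alignment column is scored by whether all residues are identical. A minimal sketch using identity only (CLUSTALW's strong/weak similarity groups are omitted for brevity, and gaps count as mismatches); the aligned fragment is hypothetical, not a real SPAG11 alignment.

```python
def conservation_symbols(aligned):
    """Per-column identity marks for equal-length aligned sequences:
    '*' = fully conserved column, ' ' = not (similarity groups omitted)."""
    return "".join(
        "*" if len({seq[i] for seq in aligned}) == 1 else " "
        for i in range(len(aligned[0]))
    )

# Hypothetical gapped alignment fragment
aln = ["CYCRKG-CK", "CYCRRGFCK", "CYCQKG-CK"]
for seq in aln:
    print(seq)
print(conservation_symbols(aln))  # '***  * **'
```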
Aortic Dissection

Aortic dissection remains one of the rare but life-threatening causes of chest pain presenting to the emergency department. A high index of suspicion is required for prompt diagnosis of cases presenting to the ED. Symptoms may vary with the extent and progression of the dissection and may further complicate the diagnosis; thus, patients may present with features of acute MI, CVA, or other end-organ ischemia. Hypertension at presentation may be an important clue to an underlying dissection. In low-risk patients, D-dimer may become a useful screening tool. In patients with a high index of suspicion, the choice of investigation will depend on the overall stability of the patient and the extent of end-organ ischemia. Stable patients may benefit from CT angiography due to its widespread availability and speed of acquisition. Diagnosis may be challenging in hemodynamically unstable patients in centers where resources are limited; transesophageal echocardiography may provide the diagnosis in such patients at the bedside or in the emergency department. Prompt investigations are required to accurately define the type and extent of damage so that the patient receives life-saving measures in a timely manner.

Newer classifications
With advances in the treatment of cases previously managed conservatively, especially endovascular interventions, recent classifications take into consideration other factors such as the duration of symptom onset, the evolution of complications, and the extent of the involved segments.

DISSECT classification
The DISSECT classification system is a mnemonic-based approach with relevance to therapeutic considerations, including endovascular management. The six features of aortic dissection are duration of disease, intimal tear location, size of the dissected aorta, segmental extent of aortic involvement, clinical complications of the dissection, and thrombus within the aortic false lumen [1].

Penn ABC classification
The Penn ABC classification further divides type A aortic dissection based on the evolution of complications:
• Aa - absence of branch vessel malperfusion or circulatory collapse
• Ab - branch vessel malperfusion with ischemia
• Ac - circulatory collapse with or without cardiac involvement
• Abc - both branch vessel malperfusion and circulatory collapse (localized and generalized ischemia)

Classification based on duration of symptoms
More recently, type B aortic dissection has been classified based on the duration of symptom onset [2]:
• Acute - less than 2 weeks
• Subacute - 2 weeks to 92 days
• Chronic - more than 92 days

Pathophysiology
The pathophysiology of AD involves the breakdown of the intima and/or the media. The initiating event is an intimal tear; less commonly, rupture of the vasa vasorum may be the initiating event. The initial tear commonly occurs at the sites of greatest hydraulic stress: the right lateral wall of the ascending aorta in about 50-65% of cases and the proximal segment of the descending aorta (20-30%) [3]. Subsequently, intramural extension of the bleeding, both longitudinally and circumferentially, causes separation of the aortic wall layers, creating a true lumen and a false lumen. A further intimal tear may create a communication between the false lumen and the true lumen.
The dissection can extend in antegrade or retrograde directions from the site of origin, leading to complications including acute aortic insufficiency, cardiac tamponade, and organ ischemia; with disruption of the adventitial layer, it may lead to aortic rupture. Intramural hematoma (IMH) is characterized by bleeding confined to the medial layer with no intimal tear visualized by current imaging studies. Rarely, ulceration of an atherosclerotic lesion penetrating to the medial layer may give rise to a penetrating aortic ulcer, with similar consequences as AD.

Predisposing conditions
Factors that increase the risk of aortic dissection in a person's life include the following:

Age and sex
Aortic dissection tends to occur more often in men between 60 and 80 years old, whereas affected women are generally older than affected men [5]. Both men and women can develop the condition at any age, but the outcome is worse in females. Familial aortic dissections occur in younger patients compared with sporadic aortic dissection [6].

Hypertension
Systemic hypertension is the most important predisposing condition for aortic dissection. An acute, transient, abrupt rise in blood pressure can lead to aortic dissection by various mechanisms, such as strenuous resistance exercise, weight lifting, or illicit use of drugs like cocaine, as well as ergotism and energy drink usage [7,8]. Chronic, long-term hypertension, in contrast, exerts sustained pressure on atherosclerotic arterial walls, leading to intimal tears and aortic aneurysm.

Genetic disorders
People with specific genetic conditions have a higher incidence of aortic dissection, such as Marfan's syndrome, Turner syndrome, Ehlers-Danlos syndrome, annuloaortic ectasia, adult polycystic kidney disease, Noonan syndrome, and osteogenesis imperfecta. Most patients with Marfan's syndrome who develop aortic dissection are young, around 40 years old, and have a family history of Marfan's syndrome and aortic dissection [9].

Bicuspid aortic valve
A bicuspid aortic valve usually leads to dissection of the ascending aorta because of severe loss of elastic fibers in the medial wall. Patients with bicuspid valves associated with aortic dissection are younger, below 40 years of age [10].

Coarctation of the aorta
The most common area for congenital coarctation of the aorta is the site of the ductus arteriosus, where the aorta is focally narrowed. That area is usually underdeveloped, hypoplastic, and small, affecting the layers of the aorta, and increases the risk of aortic dissection.

Inflammatory or infectious conditions
Inflammatory or infectious diseases that lead to vasculitis (such as giant cell arteritis, rheumatoid arthritis, Takayasu arteritis, and syphilitic aortitis) affect the vasa vasorum, the small arteries that supply blood to the aortic wall [11]. When these small arteries are compromised, ischemic injury to the aortic wall results, predisposing to aortic dissection. For example, in tertiary syphilis, inflammation begins at the adventitia of the aortic arch, leading to obliterative endarteritis of the vasa vasorum, luminal narrowing, ischemic injury of the medial aortic arch, and finally loss of elastic support and vessel dilatation.

Blunt chest trauma
The aortic area most commonly involved in blunt chest trauma is the proximal descending aorta, owing to the shear forces at the junction between the relatively mobile aortic arch and the descending aorta, which is held in place by the ligamentum arteriosum.
Usually, an acute deceleration injury in a motor vehicle accident leads to aortic rupture or dissection.

Aortic instrumentation or previous heart surgery
Cardiac surgery or instrumentation for coronary or valvular heart disease can be complicated by aortic tear, abnormal dilatation of the aorta, and risk of aortic dissection [12].

Pregnancy and delivery
Both pregnancy and delivery are independent risk factors for aortic dissection [13], but in the presence of other connective tissue diseases such as Marfan's syndrome or a bicuspid aortic valve, the risk usually multiplies. In pregnancy, aortic dissection occurs most commonly in the third trimester, due to the hyperdynamic circulatory state and hormonal effects on the vasculature.

Fluoroquinolone usage
Some observational studies report an increased association of aortic dissection or aneurysm with fluoroquinolone usage [14].

Clinical features of aortic dissection
The signs and symptoms of aortic dissection depend upon the extent of the dissection and compression of adjacent vascular structures.

Chest pain
The most common symptom is severe pain of sudden onset, described by the patient as sharp, stabbing, or tearing. When the pain is localized to the anterior chest wall, neck, or jaw, the point of origin of the dissection is usually the ascending aorta; when it is localized to the interscapular area, abdomen, and back, the descending aorta is usually involved. Pain localized to the abdomen must raise the possibility of mesenteric artery involvement. In a few cases, the patient may present with pleuritic pain if pericardial hemorrhage occurs. Rarely, dissection may present without pain, mostly in older patients and in cases involving the ascending aorta; such patients also have more instances of stroke, heart failure, and syncope.

Syncope
Usually occurs in aortic dissection presenting with cardiac tamponade or brachiocephalic vessel involvement, in up to 10% of patients.

Hypertension
Seen in 30% of type A and 70% of type B disease.

Hypotension
Seen with ascending aortic dissection; it may be due to aortic rupture leading to cardiac tamponade (more often in females), acute aortic regurgitation, acute MI, hemothorax, or hemoperitoneum.

Transient pulse deficits
These result from the intimal flap or hematoma blocking or compressing an artery. They are common in dissection involving the aortic arch and the thoracic and abdominal aorta. Patients who present with pulse deficits have greater chances of hypotension, coma, or neurological deficits, and also higher rates of complications and mortality.

Cardiac murmurs
Aortic dissection involving the aortic valve results in aortic regurgitation and an early diastolic murmur at Erb's point (Austin Flint murmur). This occurs in about 50-75% of all ascending aortic dissections.

Focal neurological deficits
These occur when the dissection involves the proximal branch arteries or compresses adjacent structures. The deficits may include:
• Stroke/altered consciousness

Work-up/diagnosis
In chronological order, as the patient is admitted to the ER with a clinical picture suggestive of aortic dissection:

Electrocardiography
ECG changes may mimic acute cardiac ischemia, which makes it further difficult to distinguish aortic dissection from acute myocardial infarction in the presence of chest pain.
If the dissection involves the coronary ostia, the right coronary artery can be affected, which will lead to ST-segment elevation in a pattern similar to inferior wall infarction. In most cases, there will be non-specific ECG changes, or the ECG can be normal.

Blood investigations
CBC: there may be leukocytosis due to the stress state. Low hemoglobin and hematocrit suggest bleeding (the dissection is leaking or has ruptured). Elevated creatinine and BUN may indicate involvement of the renal arteries (in that scenario one would expect hematuria, oliguria, or anuria), or may indicate pre-renal azotemia due to blood loss (the dissection is leaking or has ruptured). Troponin I and T may be elevated if the dissection has involved the coronary arteries and caused myocardial ischemia. LDH (lactate dehydrogenase) may be elevated due to hemolysis in the false lumen. D-dimer has a high negative predictive value: aortic dissection is less likely if the D-dimer is negative [15].

Chest radiography
Widening of the mediastinum is the classic finding (approximately 60% of cases), but the radiograph may not reveal any abnormality; in any case it should not delay further imaging such as CT or MRI [16]. A tortuous aorta (common in hypertensive patients) may be mistaken for a widened mediastinum; other differential diagnoses for a widened mediastinum include enlarged thyroid, lymphoma, tumors, and adenopathy [17]. Hemothorax may be seen, as blood can accumulate in the pleural space following rupture of the dissection. The ring sign (aortic displacement more than 5 mm past the calcific aortic intima) and an abnormal aortic contour can be seen in some patients. Other radiological abnormalities include esophageal deviation, tracheal deviation to the right, depression of the left mainstem bronchus, left apical cap, pleural effusion, and loss of the paratracheal stripe.

Transesophageal echocardiography (TEE)
TEE is as accurate as CT and MRI, and it can be used at the bedside, which makes it suitable for hemodynamically unstable patients. Limitations of TEE:
• Difficult in obese patients.
• Not suitable for patients with esophageal stenosis or varices.
• Narrow intercostal spaces, pulmonary emphysema, and mechanical ventilation decrease its accuracy.
• The upper ascending aorta and arch may not be evaluated well.
• False-positive results may occur due to reverberations in the ascending aorta [19].

Computed tomography
CT with contrast is used more frequently in emergency department settings, but only in hemodynamically stable patients who have no adverse reaction to intravenous contrast agents. A 2014 guideline from the American College of Radiology recommends CT angiography as the definitive modality if there is high clinical suspicion for aortic dissection [16]. CTA provides detailed anatomic definition of the dissection and information about plaque formation. Spiral (helical) CT is associated with a higher rate of detection and better resolution than incremental CT scanning. Imaging information, including the type of lesion, its location, the extent of disease, and evaluation of the true and false lumen, can be assessed quickly and helps the surgeon plan the operation [16,20]. Limitations of CT:
• Cannot provide information about aortic regurgitation.
• CTA is not suitable for patients with renal impairment or allergy to contrast material.
• Hemodynamically unstable patients cannot be shifted to the radiology department.

Smooth-muscle myosin heavy-chain assay
Performed in the first 24 hours.
Levels are higher in the first 3 hours; a 2.5-fold increase has a sensitivity of 91% and a specificity of 98% for aortic dissection.

Measurement of the degradation products of plasma fibrin and fibrinogen
A plasma fibrin degradation product (FDP) level of 12.6 μg/mL or higher suggests the possibility of aortic dissection with a false lumen in a symptomatic patient. An FDP level of 5.6 μg/mL or higher suggests the possibility of dissection with complete thrombosis of the false lumen [21].

Magnetic resonance imaging (MRI)
MRI has 98% sensitivity and specificity in the detection of thoracic aortic dissection. MRI shows the site of the intimal tear, the type and extent of the dissection, the presence of aortic insufficiency, and the surrounding mediastinal structures. It has the advantage of not using contrast material; thus, it is preferred in patients with renal impairment or iodine allergy [22]. Limitations of MRI:
• Not suitable for hemodynamically unstable patients.
• Requires much more time than CT.
• Not suitable for patients with pacemakers and other metallic implants [23].

Aortography
The gold-standard diagnostic modality for aortic dissection. Benefits include accurate visualization of the true and false lumen, intimal flap, aortic regurgitation, and coronary arteries. Limitations:
• Not suitable for patients with renal impairment or contrast allergy.
• Not suitable for hemodynamically unstable patients.
• The false lumen and intimal flap may not be visualized if the false channel is thrombosed.
• Simultaneous opacification of the true and false lumen may mask the dissection.

Diagnosis
Diagnosis is usually made through imaging. The choice of imaging technique depends on the patient's condition (hemodynamically stable or not). Chest radiography is the initial basic imaging technique, but it may reveal no abnormality. Further imaging options such as computed tomography (CT) and CT angiography with three-dimensional reconstruction are of higher diagnostic value. Magnetic resonance imaging (MRI) is as accurate as CT and may benefit patients who have adverse reactions to intravenous contrast agents. In hemodynamically unstable patients, echocardiography is ideal. Aortography is the gold-standard diagnostic modality.

Differential diagnosis
Myocardial infarction
Typically presents with severe substernal or left-sided chest discomfort radiating to the shoulders or left arm and shortness of breath; it can be differentiated from AD by the typical ECG changes and the rise in cardiac markers.

Myocarditis
Viral myocarditis is often preceded by flu-like symptoms, fever, joint pain, or features of upper respiratory tract infection. These patients usually present with heart failure, and ECHO is done to distinguish it from other causes of heart failure.

Pericarditis or cardiac tamponade
Presents with sharp chest pain and may have a pericardial friction rub. Patients with tamponade present with shock and have a low-voltage ECG with electrical alternans and an enlarged cardiac shadow on the chest X-ray.

Pulmonary embolism
Classically presents with sudden onset of chest pain, shortness of breath, and hypoxia. In patients suspected to have pulmonary embolism, CT pulmonary angiography is the definitive investigation to establish the diagnosis.

Tension pneumothorax
Patients present with sudden onset of sharp chest pain and desaturation with absent breath sounds. The diagnosis can be established by chest X-ray.
Esophageal rupture
Often preceded by a history of forceful vomiting, upper gastrointestinal endoscopy, or instrumentation. Chest X-ray shows pneumomediastinum, pneumothorax, or pleural effusion.

Acute management of aortic dissection involves immediate resuscitation
• Intensive blood pressure monitoring, preferably with an arterial line, to maintain SBP between 100 and 120 mmHg and heart rate < 60/min, to prevent the dissection from expanding. This lowering of blood pressure can be attained with:
○ First line - beta-blockade using labetalol (20 mg IV initially, followed by either 20-80 mg IV boluses every 10 min to a maximal dose of 300 mg, or an infusion of 0.5-2 mg/min IV) or esmolol (250-500 mcg/kg IV loading dose, then infusion at 25-50 mcg/kg/min, titrated to a maximum dose of 300 mcg/kg/min).
○ Second line - in patients with asthma, allergy, or any contraindication to beta-blockade, the calcium channel blockers diltiazem and verapamil can be used.
○ Third line - vasodilator therapy. If blood pressure remains above 120 mmHg with heart rate < 60/min, a nitroprusside infusion (0.25-0.5 mcg/kg/min, titrated to a maximum of 10 mcg/kg/min) can be initiated. Vasodilator therapy should not be used without first lowering the heart rate with a beta-blocker or calcium channel blocker.
○ Pain control using IV opioids (tramadol, morphine, fentanyl).
○ Specific management depends on the site of dissection.

Acute type A dissection
Acute type A dissection is a surgical emergency with a mortality of 1-2% per hour, as these patients are at high risk of complications such as aortic regurgitation, tamponade, myocardial infarction due to compression of the coronary ostium, stroke, and aortic rupture [24]. Surgery generally excludes patients with significant comorbidities, including prior debilitating stroke, ischemic heart disease, renal failure, malignancy, advancing age, and hemorrhagic stroke, which are associated with a bad prognosis. These patients should also be assessed for any underlying coronary artery disease or aortic valve disease by intraoperative transesophageal ECHO to identify any wall motion abnormality or aortic valvular defect. In the International Registry of Aortic Dissection (IRAD) review of 547 patients, in which 80 percent of type A patients were treated surgically and the remaining 20 percent were treated medically, inpatient mortality rates were 27 and 56 percent for surgically and medically treated patients, respectively. Medically treated patients were those with advanced comorbidities and aged individuals with a poor prognosis [25]. Open surgical repair for type A patients involves resection of the dissecting aneurysm and removal of the intimal tear, closure of the false lumen and repair of the aorta using a synthetic graft, and aortic valve repair/replacement. Repair of the aortic arch may also be needed depending on the extent of the pathology. Patients with a genetic disease such as Marfan's syndrome causing aortic regurgitation, a bicuspid aortic valve, or aortitis need aortic valve replacement [32]. An alternative to open surgical repair in type A patients with ischemic complications, such as renal, mesenteric, and peripheral ischemia, is endovascular stent grafting. A novel approach is the hybrid repair of type A dissection with the "frozen elephant trunk" technique, which combines open surgical repair of the ascending aorta in the form of a traditional elephant trunk with endovascular stent grafting to repair the descending aorta.
Studies have compared total arch replacement using the frozen elephant trunk technique (FET) with hemi-arch replacement (AHR) for type A ascending aortic dissection; survival after 5 years was 95.3% in the FET group and 69.0% in the AHR group, indicating that FET techniques prevent further operations for complications arising from the false lumen [25,33-35].

Type B aortic dissection
Medical management is preferred for uncomplicated cases of type B aortic dissection unless the dissection or aneurysm expands, ischemic complications or aortic rupture occur, or the patient has persistent uncontrolled hypertension or chest pain, in which case surgical treatment or endovascular grafting is to be considered. Conservative treatment for type B patients involves optimal BP control and long-term surveillance with imaging. In the IRAD study of 384 patients with type B aortic dissection [36], 73 percent of patients were treated medically, with a mortality rate of 13% within the first week of admission. Factors associated with increased mortality were shock on presentation, widened mediastinum, excessively dilated aorta (≥6 cm), periaortic hematoma, coma or altered consciousness, mesenteric or limb ischemia, acute renal failure, and surgical treatment. Endovascular stent grafting is done with the stent graft covering the dissection entry, leading to thrombosis and closure of the false lumen. Open surgical repair is rarely done in type B patients; it may be needed in patients with a genetic condition such as Marfan's syndrome in whom endovascular repair is difficult. Several trials have compared medical management with endovascular stent grafting in uncomplicated type B aortic dissection, demonstrating no difference in survival at 2 years between the endovascular and medical groups (89% versus 96%) [37]; however, at 5 years the occurrence of aortic complications is reduced in the endovascular group, improving late outcome [38].

Long-term management
Optimal blood pressure control is needed to prevent recurrence or aneurysm formation. This is best achieved by oral combination antihypertensive therapy, often including oral beta-blockers. A target blood pressure of less than 120/80 mmHg is preferred. Screening of first-degree relatives should be performed with transthoracic ECHO (TTE) to look for aortic aneurysm.
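To make the stepwise blood-pressure strategy described above concrete, the sketch below encodes the decision sequence (beta-blockade first, calcium channel blocker if beta-blockade is contraindicated, vasodilator only once rate control is achieved) as plain code. It is an illustration of the logic in the text, not clinical software; thresholds and drug choices are taken from the passage above, and all function and parameter names are hypothetical.

```python
def next_step(sbp, hr, beta_blocker_contraindicated=False,
              on_rate_control=False):
    """Illustrative encoding of the stepwise anti-impulse strategy
    described above (target SBP 100-120 mmHg, HR < 60/min)."""
    if not on_rate_control:
        if beta_blocker_contraindicated:
            return "Start rate control: diltiazem or verapamil (second line)"
        return "Start rate control: labetalol or esmolol (first line)"
    if hr < 60 and sbp > 120:
        return "Add vasodilator: nitroprusside infusion (third line)"
    if 100 <= sbp <= 120 and hr < 60:
        return "Targets met: continue current therapy and monitoring"
    return "Titrate rate-control agent toward HR < 60/min"

print(next_step(sbp=150, hr=85))                        # first line
print(next_step(sbp=135, hr=55, on_rate_control=True))  # add vasodilator
```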
Glutamate receptor δ2 serum antibodies in pediatric opsoclonus myoclonus ataxia syndrome

Objective
To identify neuronal surface antibodies in opsoclonus myoclonus ataxia syndrome (OMAS) using contemporary antigen discovery methodology.

Methods
OMAS patient serum immunoglobulin G immunohistochemistry using age-equivalent rat cerebellar tissue was followed by immunoprecipitation, gel electrophoresis, and mass spectrometry. Data are available via ProteomeXchange (identifier PXD009578). This generated a list of potential neuronal surface cerebellar autoantigens. Live cell-based assays were used to confirm membrane-surface antigens and adsorb antigen-specific immunoglobulin Gs. The serologic results were compared to the clinical data.

Results
Four of the 6 OMAS sera tested bound rat cerebellar sections. Two of these sera with similar immunoreactivities were used in immunoprecipitation experiments using cerebellum from postnatal rat pups (P18). Mass spectrometry identified 12 cell-surface proteins, of which glutamate receptor δ2 (GluD2), a predominately cerebellar-expressed protein, was found at a 3-fold-higher concentration than the other 11 proteins. Antibodies to GluD2 were identified in 14/16 (87%) OMAS samples, compared with 5/139 (5%) pediatric and 1/38 (2.6%) adult serum controls (p < 0.0001), and in 2/4 sera from patients with neuroblastoma without neurologic features. Adsorption of positive OMAS sera against GluD2-transfected cells substantially reduced but did not eliminate reactivity toward cerebellar sections.

Conclusion
Autoantibodies to GluD2 are common in patients with OMAS, bind to surface determinants, and are potentially pathogenic.

Opsoclonus myoclonus ataxia syndrome (OMAS), also known as "dancing eye syndrome," is a rare disorder that mainly affects children. OMAS is characterized by conjugate, asynchronous, multidirectional eye movements (opsoclonus), myoclonus, ataxia, behavioral and sleep disturbance, and sometimes cognitive decline. 1-3 The clinical and imaging assessment of the disease suggests involvement of the cerebellum and/or pontine omnipause neurons. MRI in the acute phase is usually normal, but recently, patients with long-standing OMAS have been shown to have a reduction in cerebellar gray matter volume, especially in the vermis and flocculonodular lobes, alongside a more generalized reduction in cortical thickness. 4 In pediatric OMAS, the age at onset is typically within the relatively narrow 12- to 36-month age range. 2,5 Furthermore, OMAS is associated with an underlying neuroblastoma in approximately 50% of pediatric patients. 1,6 Neuroblastoma is the most common extracranial solid tumor of childhood, derived from the sympathetic nervous system; it occurs almost exclusively in infancy and early childhood, with a median peak age between 18 and 24 months. 7 While the precise pathogenesis of OMAS is undefined, the close association with neuroblastoma strongly suggests a paraneoplastic autoimmune process. B cell expansions with elevated levels of B cell activating factor have been shown in the CSF of patients with OMAS, 8,9 and an HLA association has been established in some patients. 10 Moreover, the neuroblastomas have marked lymphocytic infiltrates, akin to the thymic histology observed in early-onset myasthenia gravis. 11
Finally, some studies describe binding of OMAS patient serum immunoglobulins to Purkinje cells, the surface of cerebellar dendritic arborizations, and a few candidate neuronal proteins, although no reproducible antigenic targets have yet been established. 11-15 Indeed, in one recent study, serum immunoglobulin G (IgG) precipitated 7 neuronal proteins found in neuroblastoma cell lines, but none were shown to be direct targets of the autoantibodies. 16 The striking overlap of symptom onset in OMAS and the peak age of neuroblastoma detection led us to hypothesize that this temporal juxtaposition is important in the pathogenesis of OMAS. Furthermore, the above observations strongly implicate cerebellar structures in disease etiology. Therefore, in our search for putative pathogenic autoantibodies in OMAS, we hypothesized an advantage to using cerebellar tissue representative of humans at approximately 2 years of age. Here, we combine immunohistology, immunoprecipitation, mass spectrometry, and bioinformatic techniques on age-equivalent rat cerebellar tissue and identified autoantibodies to the extracellular domain of glutamate receptor δ2 (GluD2) in the sera of pediatric patients with OMAS.

Patient material
OMAS serum samples (data available from Dryad, table 1, doi.org/10.5061/dryad.tq61224) were collected at diagnosis from 16 children (median age 2 years, range 1-8.5 years); further samples were available at 48 weeks in 5 of these patients. Eight (53%) were male and 11 (73%) had an associated neuroblastoma. As outlined in the table, disease control sera were available from children with new-onset epilepsy (median age 2.2 years, range 0.5-3 years, n = 78), Rasmussen encephalitis (age 8.2 years, range 1-18 years, n = 23), and autoimmune and other forms of encephalitis (age 8.25 years, range 0.4-15 years, n = 38), and from healthy adult controls (n = 37). Resected neuroblastoma tissue from one patient (18-month-old female) was available for study. Sera from 4 patients with neuroblastoma but without neurologic dysfunction (absence or presence of a neurologic syndrome is a recorded field) were obtained from the Children's Cancer and Leukaemia and Tissue Bank, Leicester Royal Infirmary. For tissue immunostaining, sections were incubated with primary antibodies (1:200) and stained with a species-appropriate fluorescently labeled Alexa Fluor secondary antibody and counterstained with DAPI.

Isolation of autoantigens
The 2 OMAS sera with the strongest binding to the cerebellum and the deep cerebellar nuclei (DCN) and pooled healthy control serum were used for the discovery of autoantigens via immunoprecipitation and analysis by mass spectrometry. Postnatal day 18 rat cerebellum was gently triturated and washed with phosphate-buffered saline, then incubated with 85 to 100 μL undiluted patient or control serum for 60 minutes with occasional inversion before the addition of solubilization buffer (150 mM NaCl, 10 mM Tris HCl, pH 7.4, 1% Triton X-100, protease inhibitor cocktail [P8340; Sigma-Aldrich, St. Louis, MO]) for 60 minutes on ice. The suspension was harvested after 2 rounds of centrifugation (2,000g for 5 minutes). Protein G Sepharose beads (Sigma) were added to the supernatant (3 hours at 4°C) to capture the antibody-antigen complexes, which were then extensively washed stepwise (150 mM through 1 M NaCl in solubilization buffer).
The IgG-bound proteins were eluted by heating the Protein G Sepharose beads to 90°C in Laemmli sample buffer, and the eluted proteins were electrophoresed (4%-12% sodium dodecyl sulfate gradient gel [WG1402; Invitrogen, Carlsbad, CA]). The protein bands were visualized with Imperial blue stain (ThermoFisher). Analysis by mass spectrometry Eluates from the immunoprecipitation were prepared for liquid chromatography-tandem mass spectrometry by pooling the Laemmli sample buffer eluted immunoprecipitate fractions followed by chloroform-methanol precipitation. Mass spectrometry was performed using data-dependent acquisition on a Thermo Q Exactive mass spectrometer (data available from Dryad, Methods, doi.org/10.5061/dryad.tq61224). Methods for filtering of protein hits are illustrated in figure 1 and data available from Dryad (table 2, doi.org/10.5061/dryad.tq61224). Cell-based assays Complementary DNA encoding the full-length human GluD2 mature polypeptide (GenBank ID NM_001510; Asp24-Ile1007) was cloned into the pHLsec vector (PMID: 17001101), immediately downstream of the secretion signal sequence and an external hemagglutinin (HA) peptide (YPYDVPDYA), and was used in a live cell-based assay (CBA) to detect antibody binding. Culture and staining procedures for the live CBAs were performed and scored as previously described. 17,18 Sera (from 1:50) or commercial antibodies (1:750) were applied to live transfected cells for 1 hour at room temperature (RT) followed by 4% paraformaldehyde fixation, washing, and incubation with unlabeled goat anti-human Fc-specific antibody (1:750, Fisher A31125), and finally a third antibody layer with Alexa Fluor 568 donkey anti-goat IgG (1:750, Fisher A11057). Binding of commercial antibodies was detected using the appropriate species-specific secondary antibody (Alexa Fluor 568 rabbit anti-mouse A-11061 and Alexa Fluor 568 goat anti-rabbit A-11011). OMAS serum IgG binding and downstream proteomic analyses Initially, adult rat brain sections were used to look for antibodies in OMAS sera. This revealed a distinct pattern of immunoreactivity in 4 of the 6 samples tested. This pattern was characterized by widespread IgG immunoreactivity of the cerebellar cortex, especially within the granular layer, and strong IgG binding to the paravermal zone, where the DCN are located; the plane of the section includes the interposed nucleus. There was no evident staining in the white matter of the cerebellum (figure 1A). The 2 OMAS sera with the largest volume of serum available, which showed this pattern (5-year-old female and 2-year-old male; both with neuroblastoma), were used in the antigen discovery experiments. Our previous attempts to identify putative antigens using embryonic or early postnatal (<P6) rat tissue had proven unsuccessful (data not shown). Therefore, cerebellar tissue of postnatal rat pups (P17-20), considered to be age-equivalent to 18- to 24-month-old humans, 21 was used instead. Precipitated OMAS IgG-antigen complexes were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis and compared to control sera (n = 5). A band of approximately 100 to 110 kDa was identified from OMAS patient gels (figure 1B). This region was excised from the OMAS and control gels and subjected to analysis by tandem mass spectrometry. Analysis of the excised bands identified GluD2 in the patient but not the control samples. The eluates typically contained approximately 18,000 peptides, matching several hundred proteins isolated from the gel bands. 
To identify targets of potentially pathogenic autoantibodies, stringent filters were applied to remove proteins present in the controls and, from the remaining unique hits, to identify cerebellar-specific membrane proteins with an extracellular domain. This reduced the putative target pool to 12 proteins (figure 1, C and D; and data available from Dryad, table 2, doi.org/10.5061/dryad.tq61224). Of these 12 proteins, GluD2, which shows high cerebellar/Purkinje cell specificity, 22 was detected at approximately 3-fold-higher levels than any of the other proteins and was enriched 20-fold compared to healthy control samples. GluD2-specific autoantibodies This identification of GluD2 as an autoantigen was confirmed using a CBA. 17 HEK293T cells were transfected with complementary DNA encoding GluD2 fused to an extracellular HA tag. Expression of GluD2 was verified with a commercial antibody against the intracellular C-terminus of GluD2 (figure 2). GluD2 antibodies were identified in 14/16 (87%) OMAS sera, and 2 of 4 sera from patients with neuroblastoma without neurologic features also showed GluD2 antibodies (figure 2). To confirm antigenic specificity, GluD2-reactive OMAS sera were adsorbed either against GluD2-transfected or untransfected HEK cells. Only GluD2 adsorption eliminated the binding (figure 2, E and F). Furthermore, all OMAS sera were negative for IgG binding to EAAT2 (excitatory amino acid transporter 2) and cerebellin, both identified at lower levels by the mass spectrometry (table; data available from Dryad, table 2, doi.org/10.5061/dryad.tq61224). However, γ-aminobutyric acid type B (GABAB)-receptor antibodies were detected in 1 of 15 OMAS and 2 of 139 disease controls. Samples at 48-week follow-up were available post immunotherapy from 8 patients with OMAS, 7 of which had been GluD2 antibody-positive at presentation. Only one sample remained GluD2 antibody-positive at 48 weeks, in an asymptomatic patient (data available from Dryad, table 1, doi.org/10.5061/dryad.tq61224). GluD2 expression in cerebellum and neuroblastoma tissue In light of these findings, we revisited the cerebellar staining (figure 3). Application of GluD2-adsorbed sera to rat cerebellar sections revealed a marked reduction of staining in the granular area and at the site of the interposed nucleus of the DCN. However, residual staining was still observed with both sera, and GluD2 expression was confirmed by immunofluorescence in neuroblastoma tissue from a GluD2 antibody-positive patient. Taken together, these results indicate that autoantibodies to GluD2 are frequently present in OMAS sera and target interposed nuclei of the DCN and other cerebellar structures. However, residual staining after GluD2-specific adsorption implies the presence of additional, as yet unidentified, antibodies that target similar brain regions. Discussion Several convergent datasets strongly suggest OMAS has an autoimmune basis. 1,8,9,[12][13][14][15][16] However, despite several efforts to date, target antigens have remained elusive. In this study, mass spectrometry and bioinformatic techniques using age-equivalent cerebellar tissue were used to identify GluD2 as an autoantigen in OMAS. The expression of GluD2 in neuroblastoma tissue taken from a GluD2 sera-positive patient with OMAS was confirmed by immunofluorescence. Given data implicating cerebellar structures in disease pathogenesis, antigen-specific modulation of GluD2 may underlie some features of OMAS. The results suggest that antibodies binding the extracellular domain of GluD2 are potentially pathogenic in pediatric OMAS. 
However, the IgG cerebellar reactivities observed after GluD2-IgG adsorption suggest it is not the sole potentially pathogenic agent in OMAS, and future studies should aim to define these other autoantigens. Nevertheless, links between GluD2 and several aspects of OMAS offer intriguing insights and are discussed in more detail below. The ionotropic GluD2 is a cerebellar-specific receptor involved in synaptic organization and thus is an appropriate target for antibodies in OMAS. Children with mutations in the GluD2 gene (GRID2) show developmental delay, a loss of acquired motor skills, ocular apraxia, cerebellar ataxia, and cerebellar atrophy. 23,24 GluD2 is highly expressed on the dendritic spines of Purkinje cells. These cells send GABAergic projections into the vermis and DCN, the output cells of the cerebellum. Modulation of these projections may alter circuitry of the cerebellum (vermis and fastigial nuclei), the inferior olives, and the brainstem saccade premotor neurons (excitatory and inhibitory burst neurons, and omnipause neurons). 25 Indeed, GluD2-deficient mice, with fewer functional synapses between the parallel fibers and Purkinje cells, have involuntary spontaneous eye and limb movements. 26 GluD2 is especially highly expressed at the parallel fiber-Purkinje cell synapse. At this synapse, GluD2 interacts with cerebellin, 27 a molecule that we also found in the immunoprecipitates from patient IgG-GluD2 complexes. Indeed, by linking GluD2, neuroblastomas, and the cerebellar nuclei, our data generate several hypotheses offering potential insights into OMAS etiology and pathogenicity. First, OMAS is a very rare condition and pediatric onset is most often within a very narrow temporal window of 12 to 48 months. It is known that in this early period, GluD2 expression rises in the cerebellum, 22 and concurrently, neuroblastomas, which we show can express GluD2, are also maturing. It may be this ectopic expression in the neuroblastoma that breaks immunologic tolerance and leads to GluD2 autoantibodies, which can autoreact with brain structures. Second, given the IgG staining pattern observed with the OMAS sera, the brain structures that would be targeted by the antibodies include focal cerebellar nuclei. These nuclei have roles in saccadic eye movements, omnipause neuron function, and ataxia. 25 The origin of the myoclonus in OMAS is not well explained on the basis of a purely cerebellar dysfunction, and this aspect requires further investigation. GluD2 autoantibodies have been reported previously, although largely using methodology that favors detection of autoantibodies against intracellular epitopes. Several single or small case reports have described antibodies to GluD2, and other glutamate-receptor subtypes, mainly in adult patients with cerebellitis and encephalitis. [28][29][30] However, the peptide-based ELISAs used are unlikely to detect antibodies that react with the surface of native neuronal proteins. By contrast, in one patient with transverse myelitis following allogeneic stem cell transplantation, patient serum IgG stained both the cerebellar molecular layer and GluD2-transfected HEK cells. In that study, GluD2 antibodies were not detected in approximately 300 disease and healthy controls. 31 The selection of patient sera and the starting material were both critical in this study. 
The chosen sera bound to specific areas of the cerebellum, particularly the DCN, while the cerebellar tissue used for the mass spectrometry experiment was obtained from rats at an age equivalent to 18 to 24 human months. Previous experiments using fetal material had been unsuccessful, which is consistent with the very low expression of GluD2 before birth, in both rodents and humans, and its rapid increase post partum. Although GluD2 is the first autoantigen with pathogenic potential described at high frequency in a substantial cohort of patients with OMAS, our study has several limitations. First, albeit only studied in a subset of patients, the antibody frequently disappeared rapidly following immunotherapy. However, we are aware of one patient in whom it persisted for >18 years of active disease. 32 The small sample size (n = 8) and late serial sampling (48 weeks from disease onset) did not permit evaluation of the correlation between antibody levels and disease activity. Second, the serum GluD2 antibody levels were not very high (maximum 1:400), although, in general, this can depend on the relative and native cell-surface expression of antigenic proteins. In addition, this study only examined serum; CSF may have offered additional information. Third, GluD2 antibodies were not unique to children with OMAS but were also found in 2 of 4 children with neuroblastoma but without neurologic disease. We speculate that these antibodies, which may have increased in response to the underlying tumor, are nonpathogenic, not at a critical threshold for that individual, or unable to gain antigenic access through the blood-brain barrier. Nevertheless, this may be biologically plausible as the antibody may have been induced by the presence of an underlying tumor. The antibody was also found in a small number (3.6%) of neurologic controls; the tumor status in all but one of these patients was unknown. However, the significantly increased occurrence of these antibodies in patients with OMAS vs controls (87.5% vs 3.6%) offers a potentially useful diagnostic test in OMAS. Finally, despite a marked reduction in cerebellar staining after GluD2-reactive IgG removal, some staining remained when the adsorbed sera were reapplied to cerebellar sections, indicating the presence of further autoantibodies to additional antigenic targets. Taken together, our findings provide possible mechanistic explanations for the site of the lesion in OMAS, the characteristic age at OMAS onset, and the relationship between the tumor and the immune system. Antibodies to surface-expressed GluD2 could identify a therapy-responsive disorder that would benefit from early treatment and tumor surveillance. Author contributions Georgina Berridge: drafting/revising the manuscript, study concept or design, analysis or interpretation of data, accepts responsibility for conduct of research and will give final approval, acquisition of data, statistical analysis. David A. Menassa: drafting/revising the manuscript, study concept or design, analysis or interpretation of data, accepts responsibility for conduct of research and will give final approval, acquisition of data. Teresa Moloney: analysis or interpretation of data, accepts responsibility for conduct of research and will give final approval, acquisition of data. Patrick J. Waters: drafting/revising the manuscript, study concept or design, analysis or interpretation of data, accepts responsibility for conduct of research and will give final approval, contribution of vital reagents/tools/patients. 
Imogen Welding: analysis or interpretation of data, accepts responsibility for conduct of research and will give final approval, acquisition of data. Selina Thomsen: analysis or interpretation of data, accepts responsibility for conduct of research and will give final approval, acquisition of data. Sameer Zuberi: drafting/revising the manuscript, accepts responsibility for conduct of research and will give final approval, acquisition of data. Roman Fischer: drafting/revising the manuscript, study concept or design, analysis or interpretation of data, accepts responsibility for conduct of research and will give final approval, contribution of vital reagents/tools/patients, acquisition of data, study supervision. A. Radu Aricescu: drafting/revising the manuscript, accepts responsibility for conduct of research and will give final approval, contribution of vital reagents/tools/patients. Michael Pike: drafting/revising the manuscript, accepts responsibility for conduct of research and will give final approval, acquisition of data. Russell C. Dale: analysis or interpretation of data, accepts responsibility for conduct of
2018-08-06T13:39:54.133Z
2018-07-25T00:00:00.000
{ "year": 2018, "sha1": "4aebfd9dd5c0fb2c594745cfff633d6d5b4f5f2a", "oa_license": "CCBY", "oa_url": "https://n.neurology.org/content/neurology/91/8/e714.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b256d4767be9e226e41eee8054a7ec39b31ecd76", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233433503
pes2o/s2orc
v3-fos-license
PRRT2 gene and protein in human: characteristics, evolution and function This study was designed to characterize the human PRRT2 gene and protein, in order to provide a theoretical reference for research on the regulation of PRRT2 expression and its involvement in the pathogenesis of paroxysmal kinesigenic dyskinesia and other related diseases. The bioinformatics tools ProtParam, ProtScale, TMHMM, SignalP 5.0, NetPhos 3.1, Swiss-Model, Promoter 2.0, AliBaba2.1 and EMBOSS were used to analyze the sequence characteristics of human PRRT2, the transcription factors and their binding sites in the promoter region of the gene, and the physicochemical properties, signal peptides, hydrophobicity, transmembrane regions, protein structure, interacting proteins and functions of the PRRT2 protein. (1) Evolutionary analysis of the PRRT2 protein showed that human PRRT2 had the closest genetic distance to Pongo abelii. (2) The human PRRT2 protein was an unstable hydrophilic protein located on the plasma membrane. (3) Random coil (67.65%) and alpha helix (23.24%) constituted the main secondary structure elements of the PRRT2 protein. There were also multiple potential phosphorylation sites in the protein. (4) The results of ontology analysis showed that the cellular component of the PRRT2 protein was the plasma membrane; the molecular functions of PRRT2 included syntaxin-1 binding and SH3 domain binding; and the PRRT2 protein is involved in the biological processes of negative regulation of soluble NSF attachment protein receptor (SNARE) complex assembly and calcium-dependent activation of synaptic vesicle fusion. (5) String database analysis revealed 10 proteins with close interactions with the human PRRT2 protein. (6) There were at least two promoter regions within 2000 bp upstream of the 5' flank of the PRRT2 gene, a 304-bp CpG island in the promoter region, and four GC boxes in the 5' regulatory region of the PRRT2 gene, and we found 13 transcription factors that could bind the promoter region of the PRRT2 gene. These results provide important information for further studies on the role of the PRRT2 gene and the identification of its functions. Background The proline-rich transmembrane protein 2 (PRRT2) gene, located on chromosome 16p11.2, has 4 exons with a total length of 3794 bases and encodes 340 amino acids. The PRRT2 protein is a presynaptic membrane protein that plays an important role in cell exocytosis and neurotransmitter release. However, the detailed functions of the protein remain unclear. Chen et al. first identified causative mutations of this gene in paroxysmal kinesigenic dyskinesia (PKD) in 2011 [1]. Subsequent studies have further confirmed that mutations in the PRRT2 gene are a major cause of PKD. In addition, the PRRT2 gene is also involved in benign familial infantile seizures (BFIS) and infantile convulsions with paroxysmal choreoathetosis (ICCA) [2,3]. Bioinformatics is a field of science that combines biology, computer science, engineering, and applied mathematics to process and analyze information on DNA and protein sequences and structures, based on massively stored biological experiments and derived datasets. The bioinformatics discipline contributes to the establishment of theoretical models, the setup of experimental research, and genomics and proteomics studies. In this study, we set out to analyze the physical and chemical properties and molecular structure of PRRT2 using a bioinformatics approach, and to predict the functions of PRRT2 in cells. 
In addition, as the sequence of the human PRRT2 gene promoter has not been recorded in the NCBI database, and no bioinformatic analysis of the PRRT2 promoter has been reported, we also screened for potential promoter sequences of the human PRRT2 gene from the genomic database and analyzed transcription factors as well as their binding sites and CpG islands in this gene. The bioinformatics results will lay a foundation for in-depth study of the functions of PRRT2 in the pathogenesis of PKD and other diseases, and for the design of gene therapy. This study will also provide a theoretical reference for the construction of a PRRT2 gene promoter expression vector and determination of the gene promoter function in subsequent experimental studies. Methods The homology of human PRRT2 with that of other species was analyzed with the DNAMAN 8.0 software, and phylogenetic analysis was carried out with MEGA 5.10. The molecular weight, theoretical isoelectric point (pI), amino acid composition, formula, protein stability, half-life, hydrophobicity and transmembrane regions of the human PRRT2 protein were analyzed using the online tools ProtParam, ProtScale and TMHMM. The signal peptides in human PRRT2 were predicted with the SignalP 5.0 software. The phosphorylation sites of human PRRT2 were analyzed with the NetPhos 3.1 software, and the nuclear localization sequence of the protein was predicted with cNLS Mapper. The functional domains and secondary and tertiary structures of the protein were analyzed using the SMART, Swiss-Model, Swiss-PdbViewer and PyMOL tools. The Gene Ontology (GO), signaling pathway, and protein interaction analyses were carried out using the Compartments online software, The Human Protein Atlas database and the QuickGO2 database. The potential promoter in the 5′ regulatory region of the human PRRT2 gene was predicted and analyzed with the online tools Neural Network Promoter Prediction, Promoter 2.0 and TSSG. The transcription factor binding sites in the 5′ regulatory region of the human PRRT2 gene, and common transcription factors, were analyzed with the online tools AliBaba2.1 and PROMO. The CpG island in the promoter region of the human PRRT2 gene was predicted with the EMBOSS and MethPrimer software. The download information and websites of these tools are listed in Additional file 1. Results The analysis of human PRRT2 protein The homology analysis of human PRRT2 protein The human PRRT2 gene is located in the short arm of chromosome 16 (16p11.2) and encodes 340 amino acids. Its specific position is chr16: 29812193-29815920, containing 4 exons. The homology of the Homo sapiens PRRT2 protein with that of the species Pongo abelii, Cavia porcellus, Equus caballus, Rattus norvegicus, Mus musculus, Bos taurus, and Danio rerio was 97.35%, 82.85%, 83.24%, 79.07%, 78.03%, 65.56%, and 23.98%, respectively. The protein sequences of the eight species were aligned using the DNAMAN 8.0 software (Fig. 1), and the phylogenetic tree of the PRRT2 protein was constructed using the neighbor-joining (NJ) method based on the sequence homology in MEGA7 software [11] (Fig. 2). The phylogenetic tree showed that Homo sapiens and Pongo abelii were the closest relatives in PRRT2 protein evolution. Mus musculus had a close relationship with Rattus norvegicus and they were grouped as a cluster. Other species were related more distantly. These results suggest that human PRRT2 had the smallest genetic distance (0.009) from Pongo abelii, followed by Cavia porcellus (0.090), and the longest genetic distance from Danio rerio (0.951) (Table 1). 
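The distance-based tree construction used here (DNAMAN alignment, MEGA neighbor-joining) can be reproduced in outline with Biopython. This is a minimal sketch under stated assumptions, not the authors' exact pipeline: the alignment file name is hypothetical, and simple "identity" distances only approximate the genetic distances reported in Table 1.

```python
# Minimal NJ-tree sketch with Biopython (assumes a pre-aligned FASTA file,
# "prrt2_aligned.fasta", containing the eight PRRT2 orthologs).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("prrt2_aligned.fasta", "fasta")

# Pairwise distance matrix based on the fraction of mismatched positions.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Neighbor-joining tree from the distance matrix, as in the MEGA analysis.
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)

Phylo.draw_ascii(nj_tree)  # text rendering of the tree topology
```

On real data one would typically also bootstrap the tree; the sketch omits this for brevity.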
Physical and chemical properties of the human PRRT2 protein The physical and chemical properties of the PRRT2 protein were analyzed with ProtParam, and the results showed that the protein is composed of 340 amino acids, with a molecular weight of 34,944.91 Da and a theoretical pI of 4.64. The formula of the PRRT2 protein is C1508H2414N426O507S10, comprising 4865 atoms in total, 45 negatively charged residues (Asp + Glu), and 25 positively charged residues (Arg + Lys). The estimated half-life is 30 h (mammalian reticulocytes, in vitro). The instability index was 68.54; therefore, this protein was classified as unstable, according to the criterion that a protein with an instability index < 40 is stable and one > 40 is unstable [12]. Hydrophilicity/hydrophobicity analysis of the human PRRT2 protein The hydrophilicity and hydrophobicity of the human PRRT2 protein were analyzed online using the ProtScale program. The results of the hydrophobicity analysis based on the K-D (Kyte-Doolittle) method are shown in Fig. 3, where a score higher than 0 indicates a hydrophobic amino acid, while a score lower than 0 indicates a hydrophilic amino acid. The highest score (3.278) was at alanine 330, the most hydrophobic site; the lowest score (−2.678) was at aspartic acid 145, the most hydrophilic site. Of the 332 scored amino acids (positions 5-336) in the human PRRT2 protein, 77.71% (258 amino acids) had a score < 0 and 22.29% (74 amino acids) had a score > 0, indicating that the human PRRT2 protein is hydrophilic. Consistently, results from the ProtParam analysis showed that the aliphatic index of human PRRT2 was 68.06 and the grand average of hydropathicity (GRAVY) was −0.538. Prediction of signal peptide and nuclear localization sequence of human PRRT2 protein The signal peptide of the human PRRT2 protein was predicted with SignalP 5.0, a signal peptide prediction server (Fig. 4). The C, Y, and S values were all calculated by the program to be 0; from these data, it could be concluded that the human PRRT2 protein has no signal peptide. Nuclear localization sequence prediction with cNLS Mapper revealed that the PRRT2 protein has no nuclear localization sequence [13]. In cNLS Mapper, a score of 8-10 predicts a protein located exclusively in the nucleus, a score of 7-8 partial nuclear localization, a score of 3-5 localization in both the nucleus and the cytoplasm, and a score of 1-2 cytoplasmic localization [14]. Prediction of the transmembrane domain of PRRT2 protein TMHMM prediction of the 340-residue protein identified two transmembrane regions (Fig. 5): amino acids at positions 268-290 and 315-337 form two typical transmembrane helices, amino acids at positions 291-314 are intracellular, and amino acids at positions 1-267 and 338-340 were predicted to be located outside the cell. Analysis of the phosphorylation sites of PRRT2 protein Phosphorylation and dephosphorylation play an important role in the processes of cell division and signal transduction in eukaryotes. NetPhos 3.1 analysis predicted that the PRRT2 protein contains 77 phosphorylation sites, including 25 serine phosphorylation sites, 8 threonine phosphorylation sites, and 1 tyrosine phosphorylation site (Fig. 6). Secondary and tertiary structure analysis of human PRRT2 protein SMART online software analysis showed the distribution of the Pfam:CD225 domain over amino acids at positions 264-331 (Fig. 7). 
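The ProtParam/ProtScale-style calculations reported in this section (molecular weight, pI, instability index, GRAVY, Kyte-Doolittle profile) can be approximated locally with Biopython. A minimal sketch, assuming the 340-residue human PRRT2 sequence is available in a FASTA file (the file name is hypothetical); small numerical differences from the ExPASy servers are to be expected.

```python
# Minimal ProtParam-style analysis sketch with Biopython.
from Bio import SeqIO
from Bio.SeqUtils.ProtParam import ProteinAnalysis
from Bio.SeqUtils import ProtParamData

seq = str(SeqIO.read("prrt2_human.fasta", "fasta").seq)
analysis = ProteinAnalysis(seq)

print("Molecular weight:", analysis.molecular_weight())
print("Theoretical pI:", analysis.isoelectric_point())
print("Instability index:", analysis.instability_index())  # > 40 => unstable
print("GRAVY:", analysis.gravy())  # negative => hydrophilic overall

# Kyte-Doolittle hydropathy profile with a 9-residue window (the ProtScale
# default); per-window scores > 0 indicate hydrophobic stretches.
kd_profile = analysis.protein_scale(ProtParamData.kd, window=9)
```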
The secondary structure of the human PRRT2 protein was predicted through the Prabi website. The results showed that the main secondary structure elements of this protein were alpha helices (79 residues, 23.24%) and random coils (230 residues, 67.65%), with 31 residues (9.12%) in extended strands. The distribution of the secondary structure is shown in Fig. 8. The tertiary structure of the human PRRT2 protein was analyzed by homology modeling on the Swiss-Model website. The GMQE and QMEAN scores were 0.08 and −3.91, indicating that the prediction was not satisfactory, which might be related to the low template coverage (only 10.94%). Further analysis of the similarity waveform of the human PRRT2 protein with its homologous template (Fig. 9) also showed a low prediction value (less than 0.6), so this model is not ideal. Subcellular localization, tissue-specific expression and GO analysis of human PRRT2 protein Subcellular localization analysis was conducted with the Compartments online software, and the results showed that the protein is localized on the plasma membrane (source: PSORT; evidence 31/32). The Human Protein Atlas database showed that the tissue specificity of PRRT2 RNA is enhanced in the brain. The GO analysis via QuickGO2 showed that the cellular component of the human PRRT2 protein is the plasma membrane (GO:0005886), that its molecular functions include syntaxin-1 binding and SH3 domain binding, and that it participates in the biological processes of negative regulation of SNARE complex assembly and calcium-dependent activation of synaptic vesicle fusion. Protein interaction The interaction network of the human PRRT2 protein was constructed from the String database with confidence set at 0.400 and the number of interactors limited to 10. The results showed 10 proteins that may interact with the human PRRT2 protein, including KRAS, HRAS, ELK1, PRKD1, MAPK3, MAPK1, SDC3, KIT, ADRA1B, and VEGFC (Fig. 10, Table 2). The GO analysis results and signal transduction pathways of the human PRRT2 protein and the interacting proteins are shown in Table 3. Prediction of the promoter in the 5′ regulatory region of human PRRT2 gene The predicted promoter regions are listed in Table 4. BLAST comparison of the 2000 bp 5′ upstream sequence of the human PRRT2 gene with the human PRRT2 gene promoter sequence HPRM39687, found on the GeneCopoeia website, showed a consistency of 81%. The full length of HPRM39687 is 1444 bp, and its transcription start site (TSS) is located at 1240 (G). The 557-2000 bp sequence within the 5′ upstream region of the PRRT2 gene was completely consistent with HPRM39687, and the 1896 (G) base corresponded to the 1240 (G) base of the HPRM39687 sequence. We speculated that the PRRT2 gene promoter is located within this approximately 1500 bp region 5′ upstream of the PRRT2 gene. Identification of the TATA box, GC box and CAAT box The TATA box has the consensus sequence TATAWAW (W stands for A or T), the GC box the consensus GGGCGG, and the CAAT box the consensus CCAAT. There were four GC boxes in the 5′ regulatory region of the human PRRT2 gene, located at −773 to −768, −1146 to −1141, −1950 to −1945 and −1956 to −1950, but no TATA box or CAAT box was found. Prediction of the CpG island in the human PRRT2 gene promoter region The EMBOSS [15] prediction showed one CpG island with a length of 304 bp, located at 1642 bp-1945 bp of the predicted sequence (Fig. 11). The MethPrimer [16] prediction showed two CpG islands, located at 1271 bp-1391 bp (length 121 bp) and 1642 bp-1945 bp (length 304 bp), respectively. 
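Tools such as EMBOSS cpgplot/newcpgreport and MethPrimer rest on the classic window criteria for CpG islands (length ≥ 200 bp, GC content ≥ 50%, observed/expected CpG ratio ≥ 0.6). A minimal window-scan sketch of those criteria is shown below; real tools additionally merge overlapping windows and smooth the signal, so outputs will not match the published coordinates exactly.

```python
def cpg_window_stats(window_seq: str):
    """GC fraction and observed/expected CpG ratio for one window."""
    s = window_seq.upper()
    g, c, cg = s.count("G"), s.count("C"), s.count("CG")
    gc_frac = (g + c) / len(s)
    # Obs/Exp CpG = n(CpG) * N / (n(C) * n(G)); guard against division by zero
    obs_exp = cg * len(s) / (c * g) if c and g else 0.0
    return gc_frac, obs_exp

def scan_cpg_islands(seq, window=200, step=1, min_gc=0.50, min_oe=0.60):
    """Yield (start, end, gc_frac, obs_exp) for windows meeting the criteria."""
    for start in range(0, len(seq) - window + 1, step):
        gc, oe = cpg_window_stats(seq[start:start + window])
        if gc >= min_gc and oe >= min_oe:
            yield start, start + window, gc, oe
```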
The prediction of the second CpG island was completely consistent with the prediction results of the EMBOSS software (Fig. 12). Discussion PRRT2 is a proline-rich transmembrane protein type II encoded by 3 exons (exons 2-4), with a total length of 340 amino acids. Recent studies have revealed that the long N-terminus of PRRT2 is located inside the cell, and the C-terminus, which contains only 2 residues, is located outside the cell [17]. PRRT2 is enriched in the presynaptic membrane of neurons in cerebral regions such as the cortex, hippocampus, basal ganglia and cerebellum, and interacts with the core protein of the SNARE complex, participating in the regulation of synaptic neurotransmitter release and promoting vesicle exocytosis. PRRT2 also plays an important role in synaptic triggering and synaptic function, and thus has been proposed to be a new synaptic protein [17]. In this study, the amino acid sequences of PRRT2 from different species were obtained from public databases. Homology analysis showed that the human PRRT2 gene and that of other mammalian species have been highly conserved during evolution. PRRT2 was predicted to be an unstable hydrophilic protein located on the plasma membrane, containing two transmembrane domains. The top 10 proteins predicted by the String database to interact with PRRT2 are involved in the Rap1 signaling pathway, the Ras signaling pathway and the MAPK signaling pathway. The promoter, located near the transcription initiation site, is a DNA sequence to which an RNA polymerase can bind to initiate transcription. Here, we used three tools based on different principles and algorithms to analyze promoters within the 2000 bp 5′ upstream sequence of the human PRRT2 gene, and found that the gene has at least two potential promoter regions on the sense strand, with the TSS located at the G base at position 1240 of the HPRM39687 sequence. In the gene expression regulation network, the combination of transcription factors and cis-acting elements can switch on or off the expression of a specific set of genes. Here, we used AliBaba2.1 and PROMO to predict transcription factor-binding sites in the promoter region of the PRRT2 gene. Thirteen transcription factors were predicted by both tools at the same binding sites, so the probability of the existence of these transcription factors is relatively high. These predictions provide evidence for the functions of PRRT2 and suggest that PRRT2 expression is regulated by a variety of transcription factors and that PRRT2 sits within a complex regulatory network with many important physiological functions. Methylation of the CpG island can inhibit the normal transcription process of the promoter, thereby reducing gene expression. In this study, the EMBOSS and MethPrimer software consistently predicted the CpG islands in the promoter region of the PRRT2 gene. One CpG island in the promoter region of human PRRT2 was located between 1642 bp and 1945 bp of the 2000 bp sequence in the 5′ regulatory region, close to the first exon, consistent with the typical distribution of CpG islands [18]. Some studies have proposed that promoter methylation hinders the recognition of binding sites by transcription factors, thereby repressing transcription [19]. Sp1 is a zinc finger structural protein belonging to the transcription factor SP family, whose classical binding sites are rich in CpG sites [20]. 
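The consensus-box searches described above (TATAWAW, GGGCGG, CCAAT) amount to simple pattern matching and can be sketched with Python regular expressions. Positions in this sketch are 0-based on the given strand, so mapping to the TSS-relative coordinates reported here (e.g., −773 to −768) requires an offset, and a full scan would also cover the reverse complement; neither step is shown.

```python
import re

# Consensus sequences quoted in the Results; IUPAC W = A or T.
CONSENSUS = {
    "TATA box": r"TATA[AT]A[AT]",  # TATAWAW
    "GC box": r"GGGCGG",           # classical Sp1 binding site
    "CAAT box": r"CCAAT",
}

def scan_boxes(promoter_seq: str):
    """All consensus-box matches, with 0-based positions on the given strand."""
    s = promoter_seq.upper()
    hits = [(name, m.start(), m.group())
            for name, pattern in CONSENSUS.items()
            for m in re.finditer(pattern, s)]
    return sorted(hits, key=lambda h: h[1])
```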
We speculated that Sp1 may directly bind to the promoter region of PRRT2 as a transcription factor, change the promoter activity, and thereby regulate transcription. It has been reported that methylation of the promoter region can block the binding of the transcription factor Sp1 to the promoter sequence and inhibit the transcription of target genes [21,22]. Studies have confirmed that PRRT2 can interact with the SNARE complex component synaptosomal-associated protein 25 (SNAP25) and is co-localized with it in the presynaptic and postsynaptic membranes [18,23]. Subsequent evidence has supported the localization of PRRT2 in the presynaptic membrane, especially enriched at the synaptic junctions [17,18,24,25]. However, the exact physiological role of PRRT2 in the presynaptic membrane remains unclear. Valente et al. have confirmed that PRRT2 is a presynaptic membrane protein that is enriched in presynaptic terminals and expressed from the onset of embryonic synapse formation [17]. The clinical phenotypes caused by PRRT2 mutation vary broadly, including a variety of episodic phenotypes from dyskinesia to epilepsy. Even the same mutation (c.649dupC) can result in different phenotypes, such as PKD, ICCA, benign familial infantile epilepsy, and febrile seizures (FS). These results indicate that the PRRT2 gene has the same pleiotropic characteristics as the GluT1 and ATP1A2 genes [26][27][28]. The PRRT2 protein is widely expressed in the nervous system, particularly in the globus pallidus, subthalamic nucleus, cerebellar peduncles, caudate nucleus, cerebral cortex, hippocampus, and cerebellum [29]. Studies have confirmed that the mRNA level of PRRT2 changes with the development of the mouse brain. PRRT2 mRNA begins to be expressed on embryonic day 16 and then gradually increases. By the 7th day after birth, it is expressed in the brain and spinal cord, and by the 14th day after birth (corresponding to 1-2 years in humans), the mRNA level of PRRT2 reaches its peak, after which it declines to a relatively low level in adult mice [29]. Moreover, the change of PRRT2 expression with age is consistent with the pathogenesis of some PRRT2-related diseases, such as the age-dependent characteristics of BFIS [30]. Therefore, the high expression of PRRT2 in the brain and its age-dependent expression pattern can partially explain the heterogeneity of PRRT2 mutation-related phenotypes. PRRT2 mutations are associated with a variety of paroxysmal diseases such as dyskinesia, epilepsy and migraine, indicating an overlap between the molecular pathogenesis of these diseases. It has been confirmed that PRRT2 proteins are mainly expressed in the cerebral cortex, hippocampus, basal ganglia and cerebellum, and enriched in the presynaptic membrane of neurons. More importantly, these areas are in line with the neuronal origin of putative PRRT2-related diseases. In PRRT2-related diseases, the heterogeneous distribution of PRRT2-positive excitatory and inhibitory neurons in different brain regions, together with the haploinsufficiency of PRRT2 caused by mutations, may lead to a region-specific imbalance between neuronal excitation and inhibition, which can eventually result in synaptic dysfunction. Conclusion In this study, we first obtained PRRT2 gene sequences from the NCBI GenBank database, obtained the 2000 bp sequence upstream of the 5′ flank of the PRRT2 gene, and then used different bioinformatics tools to predict the promoter, CpG island and transcription factors of the PRRT2 gene. 
The results provided a basic theoretical basis for the construction of vectors for PRRT2 gene promoter expression and the detection of promoter activity, and the in silico data can provide reference for future functional studies. However, more studies are needed to advance the research on PRRT2.
2021-04-29T13:38:26.450Z
2021-04-29T00:00:00.000
{ "year": 2021, "sha1": "e0ccbf88ad04c2489125b67bee107e321dbd39ec", "oa_license": "CCBY", "oa_url": "https://aepi.biomedcentral.com/track/pdf/10.1186/s42494-021-00042-4", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "afd436667ae861147b46d7cc05d79d3ebe41a84f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
245236407
pes2o/s2orc
v3-fos-license
Evaluation of Plasma Amyloid Peptides Aβ1-40 and Aβ1-42 as Diagnostic Biomarker of Alzheimer's Disease, its Association with Different Grades of Clinical Severity and 18F-Fluorodeoxyglucose Positron Emission Tomography Z score in the Indian Population: A Case-Control Study Background: We estimated plasma amyloid peptide levels (Aβ1-42 and Aβ1-40) as a diagnostic biomarker of Alzheimer's disease (AD) and evaluated their association with clinical severity and the 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) Z score of different brain regions in the Indian population. Patients and Methods: A case-control study was conducted. Diagnostic and Statistical Manual-IV, Dubois, and NIA-AA criteria were used for the diagnosis of AD. The plasma Aβ1-42 and Aβ1-40 concentrations and 18F-FDG PET Z scores were estimated for different brain regions. Results: Forty-seven cognitive impairment patients (AD = 29, mild cognitive impairment = 18) and 33 age-matched controls were enrolled. The plasma Aβ1-42 level was significantly higher in the AD group compared to controls (P = 0.046), and a cut-off >5.7 ng/mL had a specificity of 96.9%, sensitivity of 27.6%, positive predictive value of 88.9%, and negative predictive value of 60.4% for differentiating AD patients from controls. A significant correlation was seen between the Aβ1-40/Aβ1-42 ratio and the 18F-FDG PET Z score in the bilateral parietal, temporal, frontal association, and posterior cingulate areas. Conclusion: As a diagnostic biomarker of AD, the plasma Aβ1-42 level showed good specificity but low sensitivity in the Indian population. Introduction Alzheimer's disease (AD) is the most common cause of dementia in older patients (>60-65 years) and accounts for 4.9% of deaths among elderly people in the USA. [1] The global prevalence was about 25 million in 2010 and is anticipated to double by 2030 because of increased life expectancy. AD is predicted to affect one in 85 people globally by 2050. [2,3] For populations above 65 years, the prevalence of AD in Asian countries varies: 6.44% in South India, 4.86% in Shanghai (China), and 3.92% in Sri Lanka. [4] Despite such a significant effect of AD on the human race and decades of research devoted to finding a cure for this dementing illness, little has been achieved in terms of cure or reduction in the rate of its progression. This is partly related to an inherent problem: the pathogenic process in AD starts years before clinical onset, and drugs are likely to be most effective if started in the preclinical phase or in the early stage of mild cognitive impairment (MCI) or AD. To know the effects of an intervention, one should be able to diagnose MCI with certainty, to determine which MCI patients are going to progress to Alzheimer's disease, and also the rate of disease progression. There are various imaging and laboratory biomarkers (decreased cerebrospinal fluid [CSF] Aβ1-42, increased CSF tau, decreased 18F-fluorodeoxyglucose [18F-FDG] uptake in the cerebral cortices on positron emission tomography [PET], amyloid PET imaging, and measures of brain atrophy on magnetic resonance [MR] imaging) which can assist in the diagnosis of AD. However, these are either invasive (CSF), expensive, or not readily available. [5,6] Recently, significant attention has been given to the role of plasma biomarkers in the early diagnosis of AD as well as in its differentiation from other forms of dementia. The most commonly used plasma biomarkers include serum amyloid peptides. 
Because plasma sampling is simpler and less invasive than lumbar puncture, it is well suited to use in elderly patients or when multiple measures are required, such as in clinical trials. However, the published data on plasma Aβ levels in AD are conflicting. One study indicated that low or decreasing plasma Aβ42 levels and Aβ42/Aβ40 ratio were related to cognitive decline during the follow-up. [7] A high variation in the prevalence and progression of AD among different geographic regions is noted, which can be an indicator of differences in the pathogenesis of AD among these regions (e.g., variation in the incidence of different AD-causing mutations in different populations, variation in cultural and dietary factors, and the prevalence of different inherited patterns). Again, amyloid-beta-negative Alzheimer's disease is also a known entity. However, until now no study has evaluated the association between plasma amyloid-beta levels, clinical dementia stages, and the 18F-FDG PET Z score in the Indian population; the present study is the first to do so. Thus, we planned the current study to determine the role of plasma Aβ1-40 and Aβ1-42 levels in the diagnosis of Alzheimer's disease. Patients and Methods The current study was conducted in the Departments of Pharmacology and Neurology at an apex care and teaching hospital in northern India. The study was started after approval from the institutional ethics committee (Histo/15/IEMEC/37) and written informed consent from all participants. The patients were recruited from 2014 to 2015. During this period, patients with dementia were screened for inclusion in the study. The diagnosis of dementia was made based on the Diagnostic and Statistical Manual (DSM)-IV criteria. [8] Patients with dementia were then evaluated in detail to determine the exact etiology of dementia. All these patients underwent detailed hematological (complete hemogram including erythrocyte sedimentation rate and C-reactive protein) and biochemical (blood sugars, renal and liver function tests, thyroid function tests, serum electrolytes, calcium, and phosphorus) investigations. All these patients also underwent electrocardiogram and echocardiogram, serum venereal disease research laboratory testing, and testing for human immunodeficiency and hepatitis viruses. Neuroimaging (MR imaging) and 18F-FDG PET imaging were done in a subset of these patients. Other investigations, including chest X-ray, ultrasonography of the abdomen, vasculitis profile, thyroid peroxidase antibodies, toxicology profile, serum vitamin B12 levels, electroencephalography, and CSF analysis, were performed wherever indicated. The patients who were diagnosed with AD based on the Dubois criteria [9] and with MCI [10] were included in the study. The procedure for the selection of cases is depicted in Figure 1, and the inclusion and exclusion criteria for the study groups were predefined. Once included, all participants underwent detailed clinical history and examination, and a neuropsychological battery was administered by a trained neuropsychologist. 18F-fluorodeoxyglucose positron emission tomography scan Regional images of the brain were acquired 45-60 min after the IV injection of 150-180 MBq of 18F-FDG using a standard protocol. The normalized metabolism score (Z score) in different brain areas was estimated using automated software (Cortex ID v.1.04, GE Healthcare, Wisconsin, USA). In Cortex ID v.1.04, the patient's data are subtracted from age-matched normal population data, and a difference of more than 2 standard deviations (SD) (Z score >2) in a cortical area denotes significant hypometabolism compared to the healthy population. 
Statistical analysis Statistical analysis was performed with the Statistical Package for the Social Sciences version 22 (IBM Corporation, New York). Continuous data were analyzed by independent t-test or one-way analysis of variance with Bonferroni post hoc analysis. Dichotomized data were analyzed by Chi-square test or Fisher exact test, whichever was applicable. Receiver operating characteristic curve analysis of plasma Aβ1-40 and Aβ1-42 levels was done in MedCalc software to determine sensitivity and specificity. A two-tailed P < 0.05 with 95% confidence interval was considered statistically significant. Z scores from the PET scan data were calculated, and correlations between the plasma amyloid peptides (individual values and ratios) and the Z score were examined for the AD and MCI groups. Results The current study included 47 cases of cognitive impairment (AD = 29; MCI = 18) and 33 controls after screening of 191 participants. The mean (±SD) age was 69.8 (±9.9) years in the AD group, 68.7 (±7.07) years in the MCI group, and 60.9 (±9.05) years in the control group. Men constituted 14 (48.3%) of participants in the AD group, 15 (83.3%) in the MCI group, and 23 (70%) in the control group [Table 1]. Among AD patients, 7 (24.1%) had mild, 15 (51.7%) had moderate, and 7 (24.1%) had severe dementia. The mean age was significantly lower in controls than in AD and MCI patients. Regarding associated medical diseases, hypertension was seen in 10 patients in the AD group with a mean duration of 10 years, 11 patients in the MCI group with a mean duration of 14.7 years, and seven patients in the control group with a mean duration of 8.1 years. Seven patients in the AD group had diabetes mellitus with a mean duration of 8.1 years, 5 in the MCI group with a mean duration of 15 years, and 4 in the control group with a mean duration of 6.75 years. These, as well as other demographic data, are reported in Table 1. All the patients underwent detailed laboratory investigations as mentioned in the Patients and Methods section, and all the investigations were comparable between AD and MCI patients. Neuropsychological assessment tests of study groups In the current study, all the patients and controls underwent detailed neuropsychological assessment [Table 2]. AD patients were further subdivided into three groups on the basis of MMSE scores: a) mild (MMSE 26-21), n = 7; b) moderate (MMSE 20-11), n = 15; and c) severe (MMSE ≤10), n = 7. The mean (±SD) MMSE score was 15.1 ± 5.67 in the AD group and 25.5 ± 2.68 in MCI patients [Table 2]. Plasma biomarkers The mean plasma value of Aβ1-42 was 2.3 ± 1.56 ng/mL in AD patients, 1.6 ± 0.35 ng/mL in MCI patients, and 1.65 ± 0.62 ng/mL in the control group; plasma Aβ1-42 was significantly higher in AD patients than in the control group [Table 3]. Plasma amyloid peptides were measured in all 80 participants. The mean (±SD) plasma value of Aβ1-40 was 1.51 ± 1.75 ng/mL in the AD group, 1.26 ± 1.54 ng/mL in the MCI group, and 0.98 ± 0.66 ng/mL in the control group. 
Although a trend toward increasing Aβ1-40 levels was seen in the AD and MCI groups compared to the control group, the difference was not statistically significant [Table 3]. In the current study, we also calculated the ratios of the plasma levels of Aβ1-40 and Aβ1-42 (Aβ1-40/Aβ1-42 and Aβ1-42/Aβ1-40) and compared the values among all the groups. We did not find any statistically significant difference for any of these measures among the three study groups [Table 3]. We further analyzed the sensitivity and specificity of plasma Aβ1-42 levels for differentiating AD patients from controls. It was found that plasma Aβ1-42 >5.7 ng/mL has a specificity of 96.9% for differentiating AD patients from controls, though the sensitivity was only 27.6%. The positive and negative predictive values of Aβ1-42 >5.7 ng/mL for the diagnosis of AD were 88.9% and 60.4%, respectively [Supplementary Table 1]. In the current study, AD patients were further subdivided into three subgroups on the basis of MMSE score (mild = 7, moderate = 15, and severe = 7). The plasma value of Aβ1-40 was 1.04 ± 0.41 ng/mL in the mild AD group, 1.63 ± 1.83 ng/mL in the moderate AD group, and 1.71 ± 2.43 ng/mL in the severe AD group. The mean plasma value of Aβ1-42 was 2.16 ± 0.88 ng/mL in the mild AD group, 2.27 ± 1.78 ng/mL in the moderate AD group, and 2.51 ± 1.78 ng/mL in the severe AD group. We did not find any significant difference in either plasma amyloid peptide among the AD subgroups; however, we identified an incremental trend in both amyloid peptides as severity increased [Supplementary Table 2]. Correlation analysis with plasma amyloid peptides and 18F-fluorodeoxyglucose positron emission tomography Z score Twenty-nine patients who had undergone 18F-FDG PET scanning during the workup of cognitive impairment were identified. Of these, 2 had mild, 11 had moderate, and 4 had severe AD, while 12 belonged to the MCI group. Overall, in AD patients, hypometabolism was observed in the bilateral parietal and temporal lobes, including the precuneus and cingulate, and metabolism was mildly reduced in the frontal cortex; in the MCI group, some patients showed mildly reduced metabolism in the bilateral temporoparietal cortex and cingulate gyrus, while others showed no definitive evidence of hypo- or hypermetabolism in the entire brain. The mean plasma amyloid peptide levels of all 17 AD patients were compared to the control group (n = 33), and a significant difference in Aβ1-42 was found (P = 0.03) [Supplementary Table]. A significant correlation was found between Aβ1-40 and the Z score in the left temporal association area, and between the Aβ1-40/Aβ1-42 ratio and the Z score for the bilateral parietal association areas, median parietal areas, temporal association areas, frontal association areas, posterior cingulate areas, right median frontal area, and the average cerebral and global score [Table 4]. As most of the MCI patients belonged to the amnestic mild cognitive impairment category, we combined all 12 MCI patients with the 17 AD patients and performed the correlation analysis as discussed above. We found a significant correlation of Aβ1-40 with the PET Z score in the left parietal association area, left temporal association area, left posterior cingulate area, and left median parietal area [Supplementary Table 5]. Discussion The treatment of AD continues to be far from satisfactory. This is partially related to the fact that by the time AD is diagnosed clinically, the pathological process is already in the advanced stage. 
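As a numerical cross-check of the cut-off analysis reported in the Results above, the four reported figures follow from a single 2x2 table. The sketch below is illustrative only: the counts are back-calculated from the reported percentages (8 of 29 AD patients and 1 of 33 controls above the >5.7 ng/mL cut-off) and are an assumption, not data taken from the study.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts inferred from the reported percentages (29 AD cases, 33 controls).
print(diagnostic_metrics(tp=8, fp=1, tn=32, fn=21))
# -> sensitivity 0.276, specificity 0.970, PPV 0.889, NPV 0.604,
#    matching the reported 27.6%, 96.9%, 88.9% and 60.4%.
```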
Furthermore, to test the efficacy of a new intervention, it is imperative that it is applied at a stage when the pathological process has just begun; in other words, one needs to diagnose presymptomatic AD with reasonable certainty. Current investigational modalities (radiological imaging, nuclear imaging, and various CSF biomarkers [Aβ peptides, p-tau, t-tau, and MMP-9]) which are being used for this purpose are either too costly or invasive, and are difficult to apply widely, particularly in peripheral hospitals. Thus, in the present study, we tried to assess the role of two plasma amyloid peptides in the diagnosis of AD. Currently, the CSF levels of these biomarkers have been included in the research diagnostic criteria offered by the National Institute on Aging and Alzheimer's Association and the International Working Group. Recently, a "biological definition" of AD has been suggested with the A/T/N classification, which uses biomarkers of β-amyloid pathology (A), tau (T), and neurodegeneration (N). [22] The published data on the role of plasma amyloid peptide levels in AD are conflicting. Previous studies suggested that during early-stage AD there is a gradual rise in plasma levels of amyloid peptides, but as the disease process progresses their levels gradually decrease and finally normalize, so much so that once AD is clinically evident, plasma Aβ1-42 levels are comparable to those of healthy controls. [23] One study concluded that decreasing levels of Aβ1-42 in serial measurements may be associated more closely with cognitive decline than single plasma amyloid-beta measurements and may indicate the development of AD, [7] while numerous large studies have consistently reported that a lower Aβ1-42/Aβ1-40 ratio in plasma is associated with a higher risk of dementia. [24] In the present study, plasma Aβ1-42 levels were found to be significantly higher in AD patients as compared to controls. However, other measures, such as plasma Aβ1-40 and the Aβ1-40/Aβ1-42 ratio, did not show any significant differences between the three groups. A plasma Aβ1-42 cut-off of 5.7 ng/mL had a sensitivity of 27.6% and a specificity of 97% in differentiating AD from controls; the limited discrimination could be due to the large variability in age between the patient and control groups, and a study with a larger sample size is recommended. 18F-FDG PET is a common molecular imaging technique used as a biomarker. It measures intracellular glucose metabolism and has been used in various applications in neuroscience, including the study of dementia, over the past three decades. 18F-FDG PET has become the most sensitive and specific imaging modality for the diagnosis of AD, and nowadays it is considered an imaging biomarker for AD before the onset of dementia and in clinical trials. [25] The quantitative analysis of brain hypometabolism on 18F-FDG PET is done by Z score: a positive Z score >2 represents a significant reduction in metabolic activity compared to normal reference data. 
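A minimal sketch of the Z-score convention just described, in which regional patient uptake is compared against an age-matched normal database and positive values flag hypometabolism; the uptake numbers below are hypothetical and for illustration only.

```python
def hypometabolism_z(patient_uptake, normal_mean, normal_sd):
    """Regional Z score: number of SDs the patient's uptake falls BELOW the
    age-matched normal mean. With this sign convention, Z > 2 flags
    significant hypometabolism, matching the Cortex ID-style interpretation
    described in the text."""
    return (normal_mean - patient_uptake) / normal_sd

# Hypothetical regional uptake values, for illustration only.
z = hypometabolism_z(patient_uptake=4.1, normal_mean=6.0, normal_sd=0.8)
print(round(z, 3), "hypometabolic" if z > 2 else "within normal range")
# -> 2.375 hypometabolic
```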
AD patients show hypometabolism in the bilateral temporal lobes (middle and inferior temporal gyri), bilateral limbic system (parahippocampal gyrus and posterior cingulate gyrus), bilateral parietal lobes, and bilateral lateral parietal cortex. [26] Womack et al. found that temporoparietal hypometabolism was more sensitive (sensitivity 93.6%, P = 0.003), but posterior cingulate hypometabolism was more specific (specificity 71.4%, P = 0.01), for diagnosing AD. [27,28] In the present study, we found a moderate positive correlation of the amyloid peptide ratio (Aβ1-40/Aβ1-42) with the PET Z score in commonly affected brain areas, indicating higher Aβ1-40/Aβ1-42 ratios in patients with hypometabolism on PET scan. This promising finding needs to be evaluated in larger studies. Our study has several limitations. The main limitation is the small sample size. Another is that the control group was not fully matched to cases with respect to age and gender distribution. Furthermore, we could not perform serial measurements of plasma amyloid peptides in patients with dementia. Conclusion The results of our study reveal a relatively low sensitivity of plasma amyloid-beta peptides for differentiating AD from healthy controls. Future studies involving a larger sample size and longitudinal measurement of the plasma levels of various amyloid peptides will help to better characterize the role of these biomarkers in differentiating AD from healthy controls. The identification of AD in its early phase is still a major challenge, so a combined plasma amyloid and FDG PET approach might be helpful in the early detection of pathological changes in older individuals.
2021-12-17T16:41:59.893Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "5645b9c11ba284f072dd3dce003b04007cc03d40", "oa_license": "CCBYNCSA", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8771055", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "ec2fa856234f135727b897ccd457bade9b86ab12", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
25242405
pes2o/s2orc
v3-fos-license
Range Expansion of Moose in Arctic Alaska Linked to Warming and Increased Shrub Habitat Twentieth century warming has increased vegetation productivity and shrub cover across northern tundra and treeline regions, but effects on terrestrial wildlife have not been demonstrated on a comparable scale. During this period, Alaskan moose (Alces alces gigas) extended their range from the boreal forest into tundra riparian shrub habitat; similar extensions have been observed in Canada (A. a. andersoni) and Eurasia (A. a. alces). Northern moose distribution is thought to be limited by forage availability above the snow in late winter, so the observed increase in shrub habitat could be causing the northward moose establishment, but a previous hypothesis suggested that hunting cessation triggered moose establishment. Here, we use recent changes in shrub cover and empirical relationships between shrub height and growing season temperature to estimate available moose habitat in Arctic Alaska c. 1860. We estimate that riparian shrubs were approximately 1.1 m tall c. 1860, greatly reducing the available forage above the snowpack, compared to 2 m tall in 2009. We believe that increases in riparian shrub habitat after 1860 allowed moose to colonize tundra regions of Alaska hundreds of kilometers north and west of previous distribution limits. The northern shift in the distribution of moose, like that of snowshoe hares, has been in response to the spread of their shrub habitat in the Arctic, but at the same time, herbivores have likely had pronounced impacts on the structure and function of these shrub communities. These northward range shifts are a bellwether for other boreal species and their associated predators. Introduction Temperatures in the Arctic increased rapidly during the 20 th century following centuries of cooling [1,2], and the resulting landscape changes, including increased vegetation productivity and the expansion of shrubs, have been widespread [3][4][5]. The effects of warming and landscape changes on tundra wildlife, in comparison, are poorly documented and limited in spatial or temporal extent. Arctic vegetation in Alaska has been altered by 20 th century warming, yet little affected by direct human impacts, so the region provides a setting to examine the effects of altered habitat on wildlife. Here, we review historical sources across northern Alaska and the pan-Arctic to examine over a century of climatic influence on shrub habitat and moose (Alces alces). We focus our study on moose in the Alaskan tundra because their shrub habitat is known to have increased, and because their large size, unmistakable appearance, and importance as a source of protein for people lent them to historical documentation. In northern and western tundra regions of Alaska, the lack of moose during the 19 th and early 20 th century was tentatively attributed to hunting by indigenous peoples and miners, and the subsequent expansion of moose has been associated with human emigration from inland regions and the resulting reduction in hunting [6,7]. However, increasing temperatures since the mid-19 th century have led to widespread expansion of moose's shrub habitat in the Arctic [8][9][10][11], which could potentially supersede hunting reductions as the cause of moose establishment in the Alaskan tundra. 
To evaluate whether current moose presence in the Alaskan tundra is due to 20th century warming and expanded shrub habitat, we used the change in cumulative summer warmth (thaw degree days) from 1850 to 2009, combined with empirical correlations between thaw degree days and shrub height, to estimate riparian shrub height starting in 1860. We compare these reconstructions of shrub height to the habitat preferences of moose [12] to evaluate whether suitable habitat existed during periods when moose were absent from the Alaskan tundra. We discuss shrub habitat expansion alongside other potential factors, such as hunting and wolf predation, in facilitating moose expansion into tundra regions.

Moose Distribution and Habitat

Moose are the largest member of Cervidae and occupy a diversity of north temperate ecosystems. Although predation, disease, and weather influence population dynamics, suitable climate and habitat facilitate the establishment and persistence of populations, thereby shaping the regional distribution of moose [13]. The northern edge of moose distribution generally follows latitudinal treeline, occasionally extending northward into tundra along major riparian corridors [14,15]. Moose are mainly associated with early-successional habitats and selectively feed on relatively high-quality riparian shrubs, particularly willow (Salix spp.). In tundra and ecotonal regions, availability of forage shrubs above the snow is limiting [16,17]. Moose also utilize dense woody vegetation as cover to reduce detection by predators [18,19], and as protection during wolf attacks [20,21]. Moose on Alaska's North Slope (Alces alces gigas) spend 80-90% of their tracked distance during winter in habitats with shrubs taller than 1 m, and the remainder on frozen riverbeds between thickets [12]. Longer winters and correspondingly shorter shrubs coastward [22] reduce habitat suitability and likely explain why moose distribution ends some distance south of the coast in tundra regions of Alaska, Canada, and Siberia. Moose bones (n = 18) spanning 3000 years ago to the present have been recovered from eroding permafrost north of the Colville River, including several bones that date to the Little Ice Age [23]. The assemblage of carbon-dated moose bones suggests at least a periodic presence of moose in the region during the last three millennia, though the date range on some of the more recent bones indicates that they could be modern. Evidence from archeological sites, indigenous peoples, and early explorers documents an absence of moose during the latter half of the 19th and early 20th century in tundra areas of Alaska [6,[24][25][26], with few exceptions [6,27]. During the second quarter of the 20th century, moose began to appear in tundra regions of northern and western Alaska (Fig 1), and similar increases were observed in northern Canada (A. a. andersoni) [6], western Siberia [28], and western Russia (A. a. alces) [29]. By the 1940s, moose populations were becoming established along the riparian shrub corridors of the Colville River and its tributaries in Arctic Alaska. An aerial population survey in late winter of 1950 revealed 109 moose along a 90-km section of the Colville River floodplain [6]. Extensive late-winter surveys in 1970 and 1977 covering most of the North Slope (Utukok to Kongakut River) recorded between 1550 and 1700 moose [6], with approximately half of those moose residing in the middle Colville drainage [30], confirming their establishment.
By the 1980s moose had colonized northwest Alaska (Fig 1) [31].

Estimating Shrub Height

We used the relationship between summer warmth and willow height developed by Walker [22] to estimate changes in the height of riparian willows from c. 1860 to 2009. Walker [22,32] measured the 50 tallest willows (Salix richardsonii) at multiple streamside sites along a temperature gradient in Arctic Alaska and derived a relationship between thaw degree days (TDD) and shrub height:

shrub height (cm) = 0.000341(TDD)^2 - 0.195(TDD) + 27.7   (Eq 1)

(R^2 = 0.97, P = 0.002, fitted over 0 cm < shrub height < 148.8 cm). Using interpolated historical temperature data from Scenarios Network for Alaska & the Arctic Planning (SNAP) [33] to calculate thaw degree days based on mean monthly temperatures greater than 0°C (average 893 TDD in 1901-1910 vs. 1048 TDD in 2000-2009), we estimated shrub heights between 1901 and 2009 for two locations in Arctic Alaska (Fig 1). We assumed that shrub height was a function of the past 10 years of summer warmth, and therefore used an average of the previous 10 years of thaw degree days to estimate shrub heights annually since 1910. Due to the lack of observed temperature data prior to 1901, we included hindcasted temperatures for 1850-1900 generated by SNAP for the five General Circulation Models (averaged: CMI3P/AR4 5modelavg) that performed best over Alaska [34]. A 95% confidence interval was calculated around the predicted shrub height using Eq 1 and the original data [32]. We assumed that Salix alaxensis, the common floodplain willow species preferred by moose, responded to increases in cumulative warmth similarly to the willow Salix richardsonii, given their co-occurrence on floodplains [35] and their similarly dramatic increase in canopy volume as mean July temperature approaches 12°C [36]. In 2010, we measured heights of streamside shrubs, including the 50 tallest willows (Salix spp.) and Siberian alders (Alnus viridis ssp. fruticosa), occurring within each of eight 250 m by 250 m plots along the Chandler and Colville Rivers. These two riparian corridors were selected because they have the greatest density of tall shrubs and moose north of the Continental Divide of the Brooks Range [30,37], and would likely have been the first riparian corridors with sufficient habitat for moose. We compared our predictions of shrub heights in 2009 (from Eq 1) to our measured values in 2010. Mean and standard error are reported, unless otherwise mentioned.

Results

Our hindcasting of shrub height, based on the strong positive relationship between streamside willow height and thaw degree days [22], indicates that the shorter and cooler growing seasons c. 1860 would have resulted in shorter willows. For streamside willows, we estimate that the 25% increase in thaw degree days along the Chandler and Colville Rivers from 1850 to 2009 is correlated with a 79% increase in shrub height (Eq 1, Fig 2A), from approximately 1.10 m to 1.97 m.
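The arithmetic behind this reconstruction is easy to check. The short sketch below (added for illustration; it is not the authors' code, and the TDD values are simply the decadal means quoted in the Methods) implements Eq 1 together with the 10-year trailing-mean assumption:

    # Eq 1: Walker's regression of streamside willow height on thaw degree days (TDD)
    def shrub_height_cm(tdd):
        return 0.000341 * tdd**2 - 0.195 * tdd + 27.7

    # 10-year trailing mean of annual TDD, mirroring the stated assumption that
    # shrub height reflects the previous decade of summer warmth
    def trailing_mean(annual_tdd, window=10):
        return [sum(annual_tdd[i - window:i]) / window
                for i in range(window, len(annual_tdd) + 1)]

    print(round(shrub_height_cm(893)))   # decadal mean TDD for 1901-1910 -> ~125 cm
    print(round(shrub_height_cm(1048)))  # decadal mean TDD for 2000-2009 -> ~198 cm,
                                         # consistent with the 1.97 m prediction above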
Possible Causes of Moose Establishment in the Arctic

The spatial expansion and vertical growth of riparian shrubs in response to warming initiated in the 19th century would have substantially increased winter moose habitat. Moose in the Alaskan tundra are limited by winter range, and 80-90% of their tracked distance in winter is located in habitat with shrubs taller than 1 m [12]. The estimated tall shrub height of 1.10 m c. 1860 suggests that there was little to no moose habitat available (Fig 2A and 2B). Salix richardsonii, the willow species measured along a thermal gradient [22,32] and used here, responds to mean July temperature exceeding 11.8°C very similarly to Salix alaxensis: both dramatically increase canopy volume [36]. Mean July temperature was below 11.8°C c. 1860, again indicating that little to no moose habitat existed then (Fig 2C and 2D). General agreement between the observed riparian shrub heights in 2010 (2.08 ± 0.06 m, range = 1.86-2.37 m) and those predicted by the temperature records in the region (1.97 m) lends support to the thermal gradient approach used here.

[Figure 2 (a, b): The blue shrub height line (a) uses an average of the previous 10 years of thaw degree days to predict shrub height (Eq 1); the shaded area indicates the 95% confidence interval; the predicted height is consistent with measured shrub heights in 2010 (black box). The blue line uses interpolated observed temperatures, whereas the red line uses temperatures hindcasted from an average of the five highest-performing General Circulation Models [34]. The mean July temperatures of the late 1800s and early 1900s (solid line is average of previous 10 yrs) were less than 11.8°C (c), indicating that willow canopy volumes would have been greatly reduced (d), consistent with the absence of moose during that period (S1 & S2 Appendices). Temperature sensitivity of shrub canopy volume for Salix richardsonii, the species used to construct (a), is similar to that for the preferred forage species of moose, Salix alaxensis (d; reproduced from [36]).]

In tundra regions moose require shrubs protruding above the snow [16,17,38]. In riparian corridors, the valley topography and shrubs dampen the erosive wind events that commonly scour snow from the surrounding tundra, leading to snow depths more than twice as great in the riparian shrubs as on the surrounding tundra [39]. The mean late-winter snow depths of 0.51 and 0.57 m reported for successive years across the Kuparuk River basin of Arctic Alaska (adjacent to the Colville basin) excluded the deeper snow found in tall shrubs [40]; doubling the snow depth measured in open tundra [39] provides an estimate of average snow depth among riparian shrubs of approximately 1.1 m (Fig 2B). Little available forage would have protruded above the snow prior to moose establishment, and an increase in average shrub height from 1.1 to 2 m since 1860 would have dramatically increased late-winter forage. The increase in shrub height might have captured more drifting snow, potentially negating some of the gains in available forage, but the scant record of historical snow distribution led us to assume a variable but trendless end-of-winter snow cover amid increasing shrub heights.
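Because the snow-depth argument above is purely arithmetic, it can be made explicit. The following back-of-envelope sketch (an added illustration using only the numbers stated in the text) compares the forage available above the snowpack c. 1860 and in 2009:

    # Open-tundra late-winter snow depths from the Kuparuk basin surveys (m)
    open_tundra_snow = (0.51 + 0.57) / 2
    # Snow among riparian shrubs is roughly twice as deep
    shrub_snow = 2 * open_tundra_snow             # ~1.1 m

    for year, shrub_height_m in [(1860, 1.1), (2009, 2.0)]:
        forage_above_snow = max(0.0, shrub_height_m - shrub_snow)
        print(year, round(forage_above_snow, 2))  # ~0.0 m c. 1860 vs ~0.9 m in 2009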
Finally, adult moose in Alaska stand approximately 1.9 m tall at the shoulder, or approximately 0.8 m above an average riparian shrub stand in 1860, whereas current shrubs are tall enough to obscure moose and obstruct predators. The only proxy record of shrub production dating to the 19th century was reconstructed from sediment cores collected from the Colville River delta, and it shows a much greater increase in shrub production (as indicated by increases in fresh particulate organic matter transported to the delta) in the Colville watershed between 1850 and 1950 than after 1950 [8], when repeat photography documents an increase [41]. Floodplain riparian shrub cover in northern Alaska increased from 5% to 13% between 1950 and 2000, and logistic growth rates of shrub cover also suggest that shrub expansion was initiated c. 1875 [41]. Shrub expansion initiated c. 1850-1880 is also supported by direct and proxy air temperature records from Alaska and other Arctic locations showing that warming began between 1850 and 1880, reversing a long-term cooling or stable period before 1850 known as the Little Ice Age [1,10]. Summer warming prior to 1907 is evident from historic photos showing the retreat of glacier terminuses in Arctic Alaska [42] from their Little Ice Age maximum extents, a retreat that had accelerated by 1957 [43] and continues today [44]. Increasing mean annual air temperatures during the 50 to 75 years leading up to the 1950s were also evident in warming permafrost borehole temperature profiles from the region [45]. Warming inferred from proxy records is consistent with temperature data generated by running GCMs backward in time, which show an increase in summer temperature between 1850 and 1950 (Fig 2A and 2C). Our result of increasing shrub habitat linked to moose establishment can confidently be extrapolated across the North Slope and Brooks Range, where summer temperatures are comparable and there is a record of shrub habitat increase. In this region, disturbances are muted and generally confined to riparian corridors, where most shrub expansion has occurred [46,47]. In treeline and forested regions, including parts of the Seward Peninsula and Koyukuk River regions, the record of habitat change is more ambiguous, and thus the linkage between moose establishment and habitat is less reliable. We suspect that shrub expansion has actually occurred more rapidly in treeline regions than in the tundra, and this is supported by anecdotal evidence from the Kobuk River region [48]. Disturbance regimes, notably permafrost thaw and wildfire, are rare in the tundra, but are active in treeline and forested regions, and they amplify the direct effects of temperature on deciduous shrubs by initiating early successional vegetation dominated by shrubs and saplings, thus expanding moose habitat. Increasing permafrost thaw and wildfire associated with warming in the boreal forest and ecotonal regions have likely created more deciduous moose habitat [49][50][51] than in the tundra, but the restructuring of the boreal forest has yet to be quantified, particularly in terms of increasing moose habitat.

Hunting Hypothesis

A previous study suggested that the lack of moose in tundra areas of Alaska during the early 20th century was due to hunting by coastal and inland peoples. As people emigrated from inland tundra regions to the coast (indigenous people in the north in the 1920s, miners on the Seward Peninsula in the 1940s), hunting pressure was reduced, allowing moose to immigrate and inhabit riparian corridors of the tundra [6]. Perhaps the greatest strengths of the hunting hypothesis as it applies to northern Alaska are (1) caribou were heavily hunted by coastal communities prior to periods of low caribou abundance from 1870 to 1900, as revealed by archeological, anthropological, and early written records [25], and (2) low caribou abundance and the introduction of Western diseases [52] reduced the inland indigenous population or drove them to the coast by the 1920s, thereby reducing inland hunting pressure.
There are ample accounts of indigenous peoples on foot or with dog teams in the boreal forest relentlessly following moose tracks in snow until the animal was reached and shot [53,54], and tracking may have been easier in 19th century tundra environments, where visibility was good and patches of tall shrub habitat were greatly reduced and limited to fragments along floodplains. Unlike for caribou, however, evidence of moose or moose hunting in tundra regions between 1800 and 1900 is scarce. Archeological sites in Alaskan tundra and treeline regions reveal a paucity of moose remains until the mid-20th century, consistent with the absence of moose in historical accounts and with the minor representation of moose in northern indigenous culture and lore [6,24,27,52]. The introduction of breech-loading firearms during the mid-19th century corresponded with a rapid decline in caribou populations, but also with the onset of moose expansion from its refugium in Yukon Flats [25]. The hunting hypothesis was articulated well before recognition of climate change and the resulting shrub expansion in northern Alaska [9] and the pan-Arctic [3,4]. The current absence of moose along the northern Arctic coast in locations where no forest occurs (Fig 3) is due to longer winters associated with lingering sea ice and correspondingly shorter shrubs [<0.3 m; 22] that fall below the shrub-height habitat requirement [12], not hunting pressure from coastal communities. Furthermore, the snowshoe hare (Lepus americanus), another obligate browser with a similar distribution and habitat requirement to moose in northern North America [15], extended its range northward to the Colville River [30,[55][56][57] during the 1970s. This shift cannot be explained by hunting reduction, but is instead likely due to ameliorating climate and increased shrub habitat [58]. The earlier arrival of moose than snowshoe hares could be due to greater shrub habitat requirements for snowshoe hares, or to faster dispersal rates of moose. We think these findings of decreased habitat during the 19th century supersede hunting in explaining moose absence. Nonetheless, if the fragmentary moose habitat possibly present during the 19th century had supported small numbers of moose, then the effect of hunting in the late 19th century may have been to delay moose dispersal into treeline and tundra regions [25] where new shrub habitat was available. Finally, though wolf predation has been shown to limit moose abundance in many areas [59], wolf populations, like caribou, were increasing during and prior to the expansion of moose in the region [6]. Aerial wolf culling by the U.S. Fish & Wildlife Service was implemented after moose were established [60,61].

Effects of Adding Herbivores to Expanding Shrub Ecosystems

Moose and snowshoe hare are known to have pronounced impacts on vegetation composition, structure, net primary production, and ecosystem function [62]. Exclosure experiments in other arctic locations have shown that reindeer (Rangifer tarandus) browsing reduced the height and cover of a variety of deciduous shrub species [63][64][65], while changes in shrub biomass were generally neutral or negative [66]. The 20th century arrival of snowshoe hares and moose provides a natural test of the effects of adding herbivores to vegetation already responding to warming, but the preexistence of ptarmigan (mostly Lagopus lagopus) complicates matters.
In Arctic Alaska, there is a higher probability of browsing by ptarmigan on palatable willows than browsing by moose and hare combined [66]. Lacking moose and snowshoe hare c. 1850, and with fewer and shorter shrubs, ptarmigan could have been greater in number, removing proportionally as much biomass as all three herbivores do now.

[Figure 3. Overlapping areas of northern moose distribution (brown outline) [14,15], earlier spring onset (shaded) [74], and documented shrub expansion (green dots) [3,4]. Shaded areas indicate trends in spring snow cover duration 1972-2008 (see legend). The green line denotes treeline. Increasing shrub habitat along the northern edge of moose distribution portends northward range extension of moose across the Arctic. doi:10.1371/journal.pone.0152636.g003]

In any case, the addition of moose and snowshoe hares to tundra riparian corridors in northern Alaska has not been sufficient to curb shrub expansion, but their addition may have intensified the effects of browsing on willow shrub architecture, including the widely observed 'brooming' of stems and sprouting of juvenile 'stump' shoots [67]. In Alaskan floodplains, heavy browsing on willows often favors establishment and dominance by alders, which fix atmospheric nitrogen and are chemically defended against herbivory [68,69]. The indirect effects of browsing increase alder abundance and nitrogen input [62,70], which substantially influences net ecosystem production, and may have contributed to the recent spread of alder in arctic ecosystems [41].

Similarities between Late-Pleistocene and Modern Moose Expansion

The range expansion of moose into the Alaskan tundra during this recent ecological shift characterized by warmer temperatures and increased plant production resembles shifts that occurred during the Pleistocene-Holocene transition. Pleistocene steppe tundra with long cold winters and sporadic snow cover was altered by a warmer, wetter Holocene that stimulated widespread shrub expansion. Concurrent changes included the extinction of large-bodied grazers such as horse (Equus spp.) and mammoth (Mammuthus spp.) 14,000 to 11,000 radiocarbon years ago, and the arrival of humans and moose c. 13,000 radiocarbon years ago [71]. The cause of extinction of these large-bodied grazers, whether climate change or human hunting or some combination of both, has been vigorously debated, with recent evidence favoring abrupt climatic change and reduced habitat as the primary cause [72,73], which is consistent with the recent creation of shrub habitat causing establishment of moose in the tundra. The simultaneous arrival of humans and moose during the late Pleistocene warming also implies a similar minimum environmental or habitat requirement; moose required shrubs for winter forage and cover, while humans may have needed shrubs for firewood or shelter [23]. Thus, an interesting implication of our findings is that the recent colonization of Alaskan tundra areas by moose may indicate that the current shrub vegetation and climate conditions are comparable to those that were necessary for paleohuman colonization, notwithstanding other differences between the ancient and modern ecosystems.

Conclusion

We think that climatic constraints on moose habitat c. 1850 prevented moose from colonizing tundra regions of Alaska.
The combination of longer growing seasons [42] and increasing shrub habitat after 1850 [8] allowed moose to colonize tundra regions of Alaska and, later, to sustain populations hundreds of kilometers north and west of previous distribution limits. Increases in shrubs and earlier snowmelt have occurred across most of the Arctic [3,4,74], notably along the northern edge of moose distribution, so increases in abundance and extended northward distribution of moose are anticipated elsewhere (Fig 3). Indeed, the expanded shrub habitat observed across the Arctic may be contributing to observed increasing numbers of moose in parts of northern Canada [6] and northern Eurasia [28,29], as it is in Alaska.
Clinical Presentation of Brain Tumors

Introduction

Understanding the clinical picture and the signs and symptoms produced by brain tumors is complicated by the extreme heterogeneity amongst these patients. This is secondary to the variability in size, location, pathology and rate of growth of the tumor. In general, symptoms can be broadly divided into two categories: generalized or focal. Most generalized symptoms are caused by the mass effect and resulting increased intracranial pressure or global cerebral dysfunction caused by the lesion [1]. These typically are clues that a neurological abnormality exists, but are not usually helpful in determining lesion localization.

Generalized symptoms

The most common generalized symptoms are shown in table 1. Headache is the most frequent symptom and occurs in approximately 48-56% of brain tumor patients [2,3]. Headache patterns and location vary greatly depending on mechanism and pathophysiology, as described in a subsequent section. In general, headaches can be either localized or global in nature, and the intensity and rate of progression may provide insight into the rate of growth of the lesion. Lesions with a long history of slowly worsening symptoms over years tend to be more slow growing and benign, whereas acute onset headaches with a rapid crescendo pattern are worrisome for a more ominous course. The classic brain tumor headache is a global headache, often radiating to the vertex or periorbital region, which is associated with nausea and vomiting and is worse in the morning (secondary to CO2 retention and subsequent vasodilation during sleep).

Table 1. The common generalized symptoms of brain tumor patients [3].
  Headache: 52%
  Memory loss/cognitive dysfunction: 35%
  Seizures: 32%
  Personality changes: 23%
  Nausea and vomiting: 13%

Cognitive changes are not only a common presenting symptom of brain tumors, but also tend to persist even after treatment of the tumor and can affect the patient's overall quality of life and survival [4]. These neurocognitive deficits encompass memory problems, personality changes, and mood disturbances. In some instances, these changes are drastic enough to cause alarm for the patient or family member and lead directly to the diagnosis. These scenarios often include sudden changes, such as the loss of skills related to executive functioning such as paying bills, following directions, job performance or driving an automobile (figure 1). However, in other cases, these changes are slow and insidious and may often be overlooked or attributed to other causes such as aging or stress. In some instances these issues present only following a visit from a distant family member or friend who has not seen or interacted with the patient recently.
It is also not uncommon for these issues to remain completely unrecognized and to be discovered only after specific questioning and inquiry by a physician or as the result of specialized neuropsychological testing. In fact, when formal neuropsychological testing is performed on this population of patients, almost all of them show at least mild to moderate dysfunction in at least one cognitive domain [4]. It has only been over the past decade that the magnitude and impact of these cognitive deficits on such patients have been truly recognized. A majority of cognitive processes, including planning, motivation, personality, judgment, and abstraction, are controlled by the frontal lobe. However, a significant number of these processes require input from various other regions of the brain, including the parietal and temporal lobes. Therefore, neurocognitive changes are seen with tumors in multiple locations. The tools for evaluation of cognitive deficits will be discussed in a separate section. Like cognitive deficits, seizures are another symptom of brain tumors that may be present at presentation or may develop later during the disease process. Tumor-related seizures include both generalized and focal seizures. The seizure semiology (pattern and symptoms) may provide insight into lesion localization, especially in cases with focal seizures. A distinction can be made in the incidence of seizures for various types of brain tumors. Patients with primary brain tumors are more likely to present with seizures or subsequently develop them as compared to patients with brain metastases [5]. In addition, patients with low grade gliomas have seizures more commonly than those with high grade gliomas. One study showed as much as an 85% rate of seizures in those with low grade gliomas as compared to 49% in those with glioblastoma multiforme [5]. Seizures occur secondary to irritation of the cerebral cortex, either from the brain tumor itself or from the surrounding peritumoral edema. Seizures can result from lesions in any area of the cerebral cortex but are more frequently seen in patients with lesions in the frontal or temporal lobes. Lesions in the brainstem and cerebellum almost never cause seizure activity. Seizures in essence occur as a result of this cortical irritation, which causes a "short circuit" within the brain where depolarization rapidly spreads to surrounding areas. Seizure types are broken down into several categories based on the symptoms at the time of seizure onset and whether or not the seizure activity remains focused or spreads throughout the brain. Generalized seizures are those where a large portion of the brain is affected at the onset, and as a result the patient becomes unresponsive at the onset. Secondarily generalized seizures such as the classic complex partial seizure are very common and frequently occur with lesions located in the medial temporal lobe. In these cases symptoms typically start with rhythmic movement on the side contralateral to the lesion but then eventually progress to generalized seizure activity resulting in a loss of alertness and tonic and/or clonic movements on both sides of the body. Patients with generalized or secondarily generalized seizures almost always lose consciousness during the event and typically have a period of post-ictal confusion that can last for minutes to hours following such events.
However, a less common type of generalized seizure, commonly referred to as absence seizures, presents with brief staring spells without motor movements. These episodes can occur hundreds of times per day and are not usually associated with post-ictal confusion. Finally, focal seizures occur when the abnormal electrical depolarization remains contained to a small area of the brain. Symptoms with this type of seizure depend on the area involved but usually result in either episodic periods of uncontrolled motor movement and twitching or sensory complaints. The classic Jacksonian march seizure is commonly seen with lesions in or around the primary motor cortex. These patients exhibit episodes of tonic-clonic activity that typically start in one area of the contralateral extremity, such as the distal leg, and the involved activity spreads ("marches") to include a progressively larger area of the body (entire leg and then arm) as the seizure progresses. Patients with focal seizures almost always retain consciousness and awareness during their episodes [1]. Unlike patients with other causes of epilepsy, patients with lesional epilepsy secondary to a brain tumor often progress in seizure frequency, intensity and severity if they remain untreated. It is not uncommon to see a patient with a low-grade glioma who had "a spell" several years ago which was never investigated or brought to the attention of a physician until the patient suffers either repeated or more intense attacks at a later date. The treatment of epilepsy in brain tumor patients varies on a case by case basis. We do not generally recommend prophylactic antiepileptics in these patients, for many reasons. Most importantly, class I data show that the routine use of such medications doesn't prevent these patients from having seizures, but does significantly raise the incidence of drug-related side effects [6]. In addition, many of these drugs are metabolized through the cytochrome P450 pathway in the liver and can affect the bioavailability of many chemotherapeutic agents, and can thus affect the efficacy and side effect profile of these other medications [6]. We typically reserve the use of antiepileptics for patients who present with a seizure or develop one during treatment, or for rare instances of "high-risk" patients with temporal lobe lesions who require awake mapping procedures. In these unusual cases we may treat the patient only in the perioperative period. For patients requiring treatment we commonly use levetiracetam unless its use is contraindicated. This medication is typically well tolerated by the majority of these patients; however, in rare instances it can cause or exacerbate headaches or cognitive dysfunction in this patient population. The duration of treatment for patients who present with seizure activity and then do not have any further events remains controversial. In many instances the surgical removal of the epileptogenic trigger may be enough to provide long-term control; however, we recommend continuing antiepileptic medications for at least 6 months or longer and routinely perform EEG prior to considering discontinuation of any antiepileptic medications. If the EEG is normal or shows only diffuse changes, then medications can usually be stopped safely; however, if it shows significant sharp waves or other electrical evidence of cortical irritation, then we will routinely advise patients to continue treatment for at least one to two years.
Nausea and vomiting associated with brain tumors are typically a result of the increasing ICP from the space-occupying lesion. However, when occurring in the absence of other symptoms, the diagnosis is often difficult to make and is typically reached only after extensive workup has ruled out other causes, such as gastrointestinal issues. In rare instances, lesions in the brainstem or other parts of the posterior fossa can lead to pure nausea and/or vomiting without other complaints.

Focal symptoms

Compared to the generalized symptoms described above, focal symptoms of brain tumors can commonly offer clues as to the location of the lesion. This stems from the fact that focal deficits are created by the tumor or resulting edema compressing a specific portion of the brain parenchyma or cranial nerves. Therefore, from knowledge of the structure and function of the brain, we are able to use the focal deficits found on a patient's exam to predict the location of the tumor. Motor deficits from brain tumors can range from specific weakness in certain extremities to generalized weakness throughout the body. Focal motor deficits occur from a lesion situated in or around the precentral cortex (figure 2). Lesions in the prefrontal area, caudate or basal ganglia can also cause motor deficits, but these are typically more of a coordination or fine motor control issue as opposed to hemiplegia. Deficits can also occur when the lesion affects the descending fibers associated with a specific area of the motor cortex. These types of focal deficits are commonly very noticeable to patients and often lead them to seek medical attention sooner. In some cases, the tumor itself may not be in a specific motor cortex region, but edema from the tumor extends to that region. In those cases, weakness is typically very responsive to treatment with steroids. Like focal motor deficits, sensory deficits are also seen when the tumor or associated edema lies in a region controlling sensory function, such as the post-central gyrus or other areas of the parietal lobe. Some of the common sensory deficits that are seen include: graphesthesia abnormalities, stereognosis abnormalities, loss of proprioception, and abnormalities in pain and touch sensation. Graphesthesia is the ability to identify a number or letter written on the palm of the hand without watching as it is drawn. Stereognosis refers to the ability to identify an object placed in the hand when the eyes are closed. Proprioception refers to the ability to sense where a part of the body is in space. All of these abilities, along with the ability to sense touch, pain, and temperature, can be diminished when a brain tumor affects the sensory areas of the parietal lobe [1]. When tumors occur in the regions of the brain controlling or contributing to speech and language, specific forms of aphasia and language deficits can be seen. Statistics show that language deficits of some sort occur in 30-53% of brain tumor patients [7,8]. Like all symptoms, language function will be affected differently in different brain tumor patients, but most commonly tumors in the fronto-temporal region are responsible for causing aphasia [7]. Two regions of the brain, known as Broca's and Wernicke's areas, are the most documented regions for language control. In greater than 85% of people, these areas are located in the left hemisphere, in the temporal region adjacent to the Sylvian fissure.
Broca's area is located in the frontal operculum and typifies the expressive language control center, controlling a person's ability to produce speech. The location of Wernicke's area is much more variable; however, in most patients it is located in the posterior aspect of the superior or middle temporal gyrus. It is associated with the control of receptive language, a person's ability to understand both written and spoken language. The four most common types of aphasia are: Broca's aphasia, Wernicke's aphasia, global aphasia and anomic aphasia. A brain tumor causing Broca's aphasia limits a patient's ability to express their thoughts. These patients understand what they want to say, but the ability to form fluent, sensical words and phrases has been lost. On the other hand, a lesion causing Wernicke's aphasia causes a patient to produce non-sensical, non-meaningful speech. They can speak fluently, but their words and phrases have no meaning. These patients typically are not aware of the meaninglessness of their speech (figure 3). When both Broca's and Wernicke's areas have been affected, global aphasia results. These patients can neither express nor understand speech and language. This includes spoken language in addition to reading and writing. Global aphasia can occur with large lesions affecting both the dominant frontal and temporal lobes or with smaller lesions which affect the angular gyrus on the dominant side. Finally, anomic aphasia occurs when a lesion damages the left temporal region in addition to other lesions in the language pathways. This is best illustrated by a patient who has primarily word-finding and naming difficulties. Speech will be fluent at the start of a sentence, but the patient will then lose the ability to produce the next word. Anomic aphasia is the most common form of language deficit in brain tumor patients [7]. The visual pathway is not specific to one hemisphere or lobe of the brain. Rather, it encompasses both hemispheres and multiple cranial nerves, and travels from the retina posteriorly to the occipital lobe. Therefore, there are multiple locations at which visual deficits can be created by a brain tumor. Even compression of the optic nerve creates variability in visual deficits depending on the location along the cranial nerve. When compression occurs at the optic chiasm, bitemporal hemianopsia occurs, meaning that a patient is unable to see the temporal peripheral fields of both eyes. This type of visual loss is very common in patients with tumors in the pituitary region, particularly with non-functioning adenomas that extend into the suprasellar space and cause compression of the optic chiasm [9] (figure 4). However, if compression from the lesion is only on one side of the optic nerve, then visual impairment is experienced only on the affected side. Patients can also complain of changes related to decreased visual acuity. This can occur with lesions anywhere along the optic system. In addition, it is a frequent complaint among patients with hydrocephalus or increased intracranial pressure, in which case it is likely secondary to papilledema. Lesions in the occipital lobe cause a homonymous hemianopsia, in which the patient loses vision in the contralateral portion of both eyes (figure 5). The onset of visual deficits is typically very gradual, and may not cause the patient to seek medical attention until the deficits are very severe.
In fact, in patients with benign slow growing lesions, symptoms may not become evident until the patient experiences an accident from running into an object that they didn't visualize secondary to an enlarged blind spot. Cranial nerve dysfunction is much less common than the other symptoms above. It typically occurs with lesions affecting the skull base, and in many instances multiple cranial nerves may be involved. In general, the cranial nerves with pure motor functions (i.e. facial nerve) are much more resistant to compressive forces than sensory nerves (i.e. acoustic and vestibular nerves) (figure 6). In patients with metastatic disease the occurrence of multiple cranial neuropathies is an ominous sign and usually signifies the presence of leptomeningeal disease.

Headaches

Headaches are probably the most common complaint among brain tumor patients. Headaches can arise for many different pathological reasons. In some instances the headaches are unrelated to the imaging abnormalities and thus may not improve with treatment, whereas in others the headaches are directly related to the pathological abnormalities. In many instances the headaches are the result of increased intracranial pressure. This headache pattern is often associated with either large tumors or lesions that have significant surrounding peritumoral edema (figure 7). The skull is a fixed rigid space with a limited volume. Therefore, any changes in that volume directly affect the pressure within the skull. During early stages or with slowly growing lesions, the CNS has the ability to autoregulate and compensate for these changes in tumor or edema volume by decreasing spinal fluid or through changes in venous engorgement. However, sudden rapid changes in tumor size or edema can overwhelm these typical compensatory mechanisms and thus cause a dramatic change in intracranial pressure (ICP). This relationship of pressure and volume in the skull is referred to as the Monro-Kellie doctrine [1].
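Stated compactly (this is the standard textbook formulation of the doctrine, added here for clarity rather than quoted from this chapter), the skull imposes a fixed-volume constraint:

V_brain + V_CSF + V_blood + V_lesion ≈ V_skull = constant

Any growth in lesion or edema volume must initially be offset by displacement of CSF and venous blood; once these compensatory reserves are exhausted, even small additional increases in volume produce steep rises in ICP.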
Headaches resulting from dural inflammation or irritation tend to be more focal. This can be the result of focal involvement of the dura by lesions such as meningiomas or metastases, or of stretching of the dura from tumor growth. A majority of the dura is innervated by the trigeminal nerve. As a result, pain can at times be referred to the face, preauricular or periorbital region. However, in most cases the headaches are located unilaterally, ipsilateral to the pathology, and in some instances directly correlated with lesion location. In my experience, headaches that are directly correlated with the location of imaging abnormalities almost always improve with surgical resection. Tumors in the sellar region can commonly cause stretching of the diaphragma sellae (figure 8). These headaches commonly radiate to the periorbital or bifrontal region. The intensity and frequency of these types of headaches commonly fluctuate and can be sporadic in nature, likely due to transient changes in local inflammation or pressure in the lesion itself. Tumors that invade or compress the trigeminal nerve typically cause a very classic headache syndrome. Many of these patients experience facial pain syndromes similar to classic trigeminal neuralgia. This can often be dysesthetic in nature and frequently can become very severe and debilitating. The pain is always on the side of the lesion unless there is involvement of the brainstem. Unlike in classic trigeminal neuralgia, pain related to these lesions typically does not respond to medical management (gabapentin, pregabalin or carbamazepine) (figure 9). In rare instances tumors can be large enough to compress or stretch the large arteries in the brain. Headaches from this cause are infrequent and vary in nature and symptomatology. On the other hand, headaches resulting from tumor bleeding tend to be very classic. These headaches are often thunderclap or sudden in onset and occur instantaneously with a high intensity. Mental status changes and nausea and vomiting may accompany this type of headache depending on the volume and degree of hemorrhage (figure 10). Table 3 shows the most common histologies for brain metastases which result in hemorrhage.

Table 3. Most common histologies for brain metastasis that result in hemorrhage.

Lesions located in the posterior fossa can cause headaches that refer pain to the vertex, especially if there is associated hydrocephalus. In addition, pain may also be referred to the auricular and postauricular region secondary to innervation of the petrous dura and tentorium. These patients may also complain of pain in the suboccipital region. In rare instances of tonsillar herniation, either from posterior fossa lesions or hydrocephalus, patients may complain of severe neck pain at the base of the skull which worsens with extension. The determination of which headache patient to image is always a difficult decision for the primary care or emergency room doctor, as there are many more patients who complain of headaches who don't have intracranial pathology than those who do. My recommendation has always been that imaging should be strongly considered for adult patients who previously have not had significant headaches but then start having gradually progressive headaches. Patients with rapidly deteriorating headaches or those with thunderclap onset deserve more urgent evaluation. A strong index of suspicion should also be entertained when new headaches occur with nausea and vomiting and persist despite routine headache management. Headaches associated with any other neurological finding or seizure activity also demand urgent imaging. In most instances MRI with and without contrast is the gold standard, as CT scanning, even when performed with contrast, can have significant false negative rates. CT scanning may be adequate when intracranial hemorrhage or hydrocephalus is of concern based on clinical suspicion.

Cognitive evaluation for brain tumor patients

A majority of patients harboring brain tumors experience changes in cognitive or high-level executive functioning [10][11][12][13][14]. Many patients may complain of subjective deficits which often are difficult to characterize, while others can slowly develop large abnormalities and be unaware of the slowly progressive changes (figure 11). Unless patients are exhibiting major confusion and disorientation, these complaints often go unassessed and unaddressed. In addition, most patients will exhibit at least partial improvement if a cognitive rehabilitation program is instituted before symptoms become too devastating [15]. Objective screening tests are limited and vary in efficacy in this patient population. Extensive neuropsychological batteries are time-consuming and require a trained neuropsychologist, which is not available at many major brain tumor centers, let alone smaller community treatment facilities.
In addition, these deficits can change with time and are affected by all treatment modalities: surgery, radiation and chemotherapy. Therefore, any reliable screening tool must also take into account the effect of "learning" from prior administrations. Not until the past decade has the importance of assessing cognitive function and evaluating the impact of treatment on such function come to the forefront, despite the known association of cognitive functioning with quality of life and overall survival [16][17][18]. Only recently have prospective studies incorporated some aspect of cognitive evaluation in their study design. The mini-mental status examination (MMSE) is the most frequently used tool in clinical practice as well as in many large research studies. This test has numerous drawbacks, including extremely low sensitivity and specificity in this patient population. In addition, a "ceiling effect" exists [4]. Many patients score normal results on this test despite having significant cognitive impairments. For the past five years I have been using the Montreal Cognitive Assessment (MoCA) with my tumor patients. This free screening tool (available at www.mocatest.org) can be administered by office staff or physicians with minimal training and excellent inter-observer reliability. It assesses several aspects of cognitive function, including executive function, visuo-spatial function, naming, memory, attention, abstraction, language and orientation, and has been used extensively as a screening tool and for serial examinations in numerous different pathological conditions from dementia to heart failure.

[Figure 11. Axial post-contrast T1-weighted and FLAIR MRI (a, b) in a patient with a large olfactory groove meningioma who presented with cognitive dysfunction.]

The MoCA is more sensitive than the MMSE for detecting mild cognitive impairment (MCI) [19][20][21]. Olson compared the efficacy of the MoCA vs the MMSE in a group of patients with brain metastases by administering both tests to patients at a similar time point after diagnosis of their brain metastasis. Ninety-eight percent of patients completed the test in less than 15 minutes, and 88% of patients took less than 10 minutes. Based on the results of the study (using normal cutoff scores for both tests), 80% of patients were classified as having at least mild cognitive impairment on the MoCA (score <26) vs. 30% using the MMSE (score <26) [22]. In 2011 the same group reported the results of 58 brain metastasis patients who were studied prospectively [4]. Once again both tests, the MoCA and MMSE, were administered; 67% of the patients also underwent formal neuropsychological assessment (NPA), which consisted of a battery of tests taking 3-4 hours to complete. This formal testing was performed within 2 weeks of administration of the screening tests. Study analysis showed that only 7% of patients scored normal on the NPA and an additional 38% had borderline results; the remainder of the patients had cognitive impairment in more than two domains. MMSE results showed abnormal cognitive function in only 12.8%, whereas the MoCA showed impairment in 53.8%. Thus, based on this poor sensitivity, the MMSE is clearly a poor screening tool for determining cognitive impairment in these patients and has limited value. The MoCA was more sensitive in determining mild cognitive impairment but still failed to identify all cases [4].
Finally, in yet another study they were able to show that the results of the MoCA were highly correlated with overall survival in patients undergoing treatment for brain metastasis but failed to show a relationship of survival to MMSE results [22].
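As a minimal illustration of the cutoff logic used in the studies above (an added sketch, not a validated clinical instrument; the <26 threshold is the one quoted for both tests):

    MOCA_CUTOFF = 26   # scores below this suggest at least mild impairment
    MMSE_CUTOFF = 26

    def screening_flags(moca=None, mmse=None):
        """Return the screening tests whose scores fall below the cutoff."""
        flags = []
        if moca is not None and moca < MOCA_CUTOFF:
            flags.append("MoCA")
        if mmse is not None and mmse < MMSE_CUTOFF:
            flags.append("MMSE")
        return flags

    # A patient scoring 24/30 on the MoCA but 28/30 on the MMSE is flagged only
    # by the MoCA, the pattern underlying the sensitivity gap described above.
    print(screening_flags(moca=24, mmse=28))  # ['MoCA']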
Dirac Geometry of the Holonomy Fibration

In this paper, we solve the problem of giving a gauge-theoretic description of the natural Dirac structure on a Lie group which plays a prominent role in the theory of D-branes for the Wess-Zumino-Witten model as well as in the theory of quasi-Hamiltonian spaces. We describe the structure as an infinite-dimensional reduction of the space of connections over the circle. Our insight is that the formal Poisson structure on the space of connections is not an actual Poisson structure, but is itself a Dirac structure, due to the fact that it is defined by an unbounded operator. We also develop general tools for reducing Courant algebroids and morphisms between them, allowing us to give a precise correspondence between Hamiltonian loop group spaces and quasi-Hamiltonian spaces.

Introduction

An invariant Poisson structure on a finite-dimensional principal bundle P → B descends to a Poisson structure on the base. This is immediate from the identification of functions on B with invariant functions on P, or alternatively, because the invariant Poisson bivector field π_P pushes down to a Poisson bivector field on B. One is tempted to apply these facts to the following infinite-dimensional setting. Let G be a connected Lie group. Its loop group LG = Map(S¹, G) acts by gauge transformations on the space A = Ω¹(S¹, g) of connections on the trivial G-bundle over the circle. The based loop group L₀G ⊆ LG acts freely, and the holonomy of a connection identifies A/L₀G with G. We will refer to the resulting principal L₀G-bundle Hol : A → G as the holonomy fibration. Suppose the Lie algebra carries an invariant metric, used to identify g with g*. It defines a central extension L̂g of Lg by R, and one may regard A as the affine subspace of L̂g* at level 1. Formally, it carries a Poisson structure called the Lie-Poisson structure, with symplectic leaves the level 1 coadjoint orbits of LG. The naive attempt to push this down to a Poisson structure on G runs into problems, related to the precise meaning of a Poisson structure in infinite dimensions. Indeed, the Lie-Poisson structure on A, viewed as a bilinear bracket {·, ·} on functions, cannot be defined on all functions; its domain does not even contain all pullbacks Hol* f with f ∈ C∞(G). Similarly, the Lie-Poisson structure on A cannot be a genuine bivector field, since sections of ∧²TA, by definition, have only finite rank. In this paper, we shall take a third viewpoint, regarding the Lie-Poisson structure on A as a Dirac structure. Recall that a Dirac structure on a finite-dimensional manifold Q is a Lagrangian sub-bundle E ⊆ TQ ⊕ T*Q satisfying a certain integrability condition. Poisson structures are Dirac structures for which E is the graph of a skew-adjoint bundle map T*Q → TQ. In finite dimensions, this is equivalent to the property E ∩ TQ = 0. The definition of Dirac structures carries over to infinite-dimensional Hilbert manifolds, but here the conditions of E being a graph or having trivial intersection with the tangent bundle are no longer equivalent. We will call a Dirac structure E with the latter property a weak Poisson structure. Equivalently, weak Poisson structures are described by a family of skew-adjoint operators D_q : dom(D_q) → T_qQ, with dense domain dom(D_q) ⊆ T*_qQ. The leaves of a weak Poisson structure carry closed 2-forms that are weakly symplectic.
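For concreteness, the following spells out one common set of conventions for the gauge action, the holonomy, and the operators ∂_A appearing below (the signs are a normalization assumed here; conventions vary in the literature). Writing A = a dt with a ∈ Lg, a gauge transformation g ∈ LG acts by

g · A = Ad_g A − (∂g) g⁻¹,

and Hol(A) = γ(1), where γ : [0, 1] → G solves γ(0) = e, (∂γ) γ⁻¹ = −a. One checks that Hol(g · A) = g(0) Hol(A) g(0)⁻¹, so the holonomy is invariant under the based loop group L₀G and induces the identification A/L₀G ≅ G. The covariant derivative at A is

∂_A ξ = ∂ξ + [a, ξ], ξ ∈ Lg,

and it is skew-adjoint for the L² pairing ∫_{S¹} ⟨ξ₁, ξ₂⟩ dt defined by the invariant metric on g: ad-invariance gives ⟨[a, ξ₁], ξ₂⟩ + ⟨ξ₁, [a, ξ₂]⟩ = 0, while ∫_{S¹} ∂⟨ξ₁, ξ₂⟩ dt = 0 by periodicity.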
Taking A to consist of connections of a fixed Sobolev class (e.g. L² or higher), we observe that the Lie-Poisson structure is well-defined as a weak Poisson structure in the above sense. The corresponding skew-adjoint operators are the covariant derivatives ∂_A. Using the reduction procedure for Dirac structures [9], this weak Poisson structure may be pushed down under the map Hol. We will show that the result is the well-known Cartan-Dirac structure on G. The Cartan-Dirac structure was discovered independently by Alekseev, Ševera and Strobl in the late 1990s, and plays an important role in the theory of D-branes [13,16,20] as well as for quasi-Hamiltonian G-spaces [2,3]. Our reduction procedure extends to Hamiltonian spaces, and clarifies the correspondence [3] between Hamiltonian loop group spaces [26] and q-Hamiltonian G-spaces. We also describe multiplicative properties of the Cartan-Dirac structure [2,22] from the point of view of reduction from suitable spaces of connections. In this article, we will mostly work with a closely related holonomy fibration Hol : A_I → G, given by connections on the interval I = [0, 1], with an action of the gauge group G_I = Map(I, G). The 'Lie-Poisson' structure on A_I is a Dirac structure described by the covariant derivatives ∂_A as before, but whose domain involves periodic boundary conditions. The reduction by the group G_{I,∂I} of gauge transformations trivial at the boundary results in the Cartan-Dirac structure. From this point of view, we may consider alternative boundary conditions for the family of operators ∂_A, given by Lagrangian Lie subalgebras s ⊆ g ⊕ g. The corresponding weak Poisson structures on A_I reduce to generalizations of the Cartan-Dirac structure.

Acknowledgements. A. C. thanks the University of Toronto for hospitality during the beginning of this project. M. G. was supported by an NSERC Discovery Grant and acknowledges support from U.S. National Science Foundation grants DMS 1107452, 1107263, 1107367 "RNMS: GEometric structures And Representation varieties" (the GEAR Network). E. M. was supported by an NSERC Discovery Grant. We thank Henrique Bursztyn for helpful discussions and for posing the problem of determining the geometric nature of the reduction of Hamiltonian loop group spaces to quasi-Hamiltonian spaces.

2. Dirac structures in infinite dimensions

In this section, we review the theory of Courant algebroids, Dirac structures, and their reduction in an infinite-dimensional context. For a treatment of differential geometry on Banach manifolds and Hilbert manifolds, see e.g. [1]. Much of the material is a direct extension of the finite-dimensional theory. Special care needs to be taken due to the fact that the sum of closed subspaces of a Banach space need not be closed. These problems are already apparent in the linear version of the theory, described below.

2.1. Linear Dirac geometry in infinite dimensions. Throughout this paper, the terms Banach space and Hilbert space designate a real topological vector space V whose topology is defined by a Banach norm or Hilbert inner product, respectively. The norm or inner product itself is not considered part of the structure. By [24], a Banach space is a Hilbert space if and only if every closed subspace admits a closed complement. For this reason, we will mainly work with Hilbert spaces and Hilbert manifolds. A continuous symmetric bilinear form ⟨·, ·⟩ : V × V → R on a Hilbert space V is called non-degenerate if the associated map V → V*, v ↦ ⟨v, ·⟩, is an isomorphism.
We will refer to ·, · as a pseudo-Riemannian metric, or simply as a metric, and call V a metrized Hilbert space. We stress that ·, · is not necessarily a Hilbert space inner product. If F is a subspace of a metrized Hilbert space V , denote by F ⊥ its orthogonal relative to the metric. Accordingly, F is called isotropic if F ⊆ F ⊥ , co-isotropic if F ⊥ ⊆ F , and Lagrangian if F = F ⊥ . A Lagrangian splitting of V is a direct sum decomposition V = F 1 ⊕ F 2 into Lagrangian subspaces. In finite dimensions, this is equivalent to F 1 ∩ F 2 = 0, but in infinite dimensions this is stronger: If C ⊆ V is a closed co-isotropic subspace of a metrized Hilbert space, we define a reduced space V C = C/C ⊥ . It inherits a metric from the metric on V . Given a subspace F ⊆ V , define F C = (F ∩ C)/(F ∩ C ⊥ ). In finite dimensions, the reduction L C of a Lagrangian subspace L is again Lagrangian, but this need not be the case in infinite dimensions: Example 2.2. In the setting of Example 2.1, pick v ∈ H − dom(A), and let C = span(v) ⊕ H * . Then C is coisotropic, with C ⊥ = ann(v) ⊆ H * . Hence C/C ⊥ = span(v) ⊕ span(v) * is 2-dimensional. The Lagrangian subspace L = gr(A) satisfies L ∩ C = 0, hence L C = 0 is not Lagrangian. To ensure that the reduction of a Lagrangian subspace is Lagrangian, we need an additional condition: Proposition 2.3. Let V be a metrized Hilbert space, and C a closed co-isotropic subspace of V . Let L ⊆ V be a Lagrangian subspace with the property that L + C is closed. Then A proof is given in the Appendix, see Proposition A.1. Remark 2.4. Given a metrized Hilbert space V , the sum F 1 + F 2 of subspaces is closed if and only if F ⊥ 1 + F ⊥ 2 is closed. Hence, the condition in Proposition 2.3 is equivalent to the condition that L + C ⊥ be closed. Remark 2.5. In subsequent sections, we use vector bundle versions of the results described above. We refer to a Hilbert vector bundle V → M over a Hilbert manifold, with a (pseudo-Riemannian) fiber metric ·, · , as a metrized vector bundle. Given a closed coisotropic subbundle C ⊆ V , the quotient V C = C/C ⊥ inherits a metric. For a Lagrangian sub-bundle L ⊆ V the reduction L C = (L ∩ C)/(L ∩ C ⊥ ) is a Lagrangian subbundle provided L + C is a closed subbundle. In particular, this is the case if the intersection is transverse, i.e. L + C = V , or if L ⊆ C. For any metrized Hilbert space V , let V denote the same Hilbert space with the opposite metric. A Lagrangian relation between two metrized Hilbert spaces is a linear relation whose graph gr(R) ⊆ V 2 × V 1 is Lagrangian. We will write v 1 ∼ R v 2 if and only if (v 2 , v 1 ) ∈ gr(R), and define the kernel and range of R as The space ker(R) is closed, but ran(R) not necessarily so. Similarly, we define ker * (R) = ker(R ⊤ ) and ran * (R) = ran(R ⊤ ), where R ⊤ : V 2 → V 1 is the transpose relation. We have ker(R) = ran * (R) ⊥ , ker * (R) = ran(R) ⊥ . Given another Lagrangian relation R ′ : V 2 V 3 , one defines R ′ • R as a composition of relations. If the V i are finite-dimensional, then R ′ • R is again a Lagrangian relation, but in infinite dimensions additional assumptions are needed. We say that Since ran(R) + ran * (R ′ ) is the image of gr(R ′ ) × gr(R) under the projection V → V /C ∼ = V 2 , this is equivalent to (gr(R ′ ) × gr(R)) + C = V . By Proposition 2.3 this guarantees that R ′ • R is a Lagrangian relation. Taking orthogonals, we see that the transversality (1) implies is uniquely determined. 
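For reference, a hedged restatement of the composition being discussed, paraphrasing the displayed conditions (1) and (2) under the standard conventions: given Lagrangian relations R : V 1 ⇢ V 2 and R ′ : V 2 ⇢ V 3 , the composition is

\[ \mathrm{gr}(R' \circ R) \;=\; \{\, (v_3, v_1) \;:\; \exists\, v_2 \in V_2 \ \text{with}\ v_1 \sim_R v_2 \ \text{and}\ v_2 \sim_{R'} v_3 \,\}, \]

and the transversality condition (1) presumably reads

\[ \mathrm{ran}(R) + \mathrm{ran}^*(R') \;=\; V_2 . \]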
We will call the composition R ′ • R weakly transverse if the condition (2) holds, or equivalently ran(R) + ran * (R ′ ) is dense in V 2 . Definition 2.7. A pair (V, E) consisting of a metrized Hilbert space and a Lagrangian subspace is called a linear Dirac structure. A linear Dirac morphism R : where the composition is weakly transverse (i.e. E 1 ∩ ker(R) = 0). If the composition is transverse (i.e. E 1 + ran * (R) = V 1 ), we will call R a strong linear Dirac morphism. Here the Lagrangian subspaces E i ⊆ V i are regarded as linear relations E i : 0 V i . In the following result, we consider F i ⊆ V i as Lagrangian relations F i : V i 0. is a linear Dirac morphism, and let F 2 be a Lagrangian complement to E 2 . Then Suppose now that the composition is transverse, so that V 1 = E 1 + ran * (R). Let are strong linear Dirac morphisms. Then the composition R ′ • R is transverse, and defines a strong linear Dirac morphism R ′ • R : Courant algebroids. The usual definition of a Courant algebroid [25,32] works equally well for infinite dimensional manifolds. In the remainder of this section we shall use the terms "manifold", "vector bundle", "Lie group", etc. to refer to Hilbert manifold, Hilbert vector bundle, Hilbert Lie group, and so on. A metrized vector bundle is a Hilbert vector bundle with a fiberwise (pseudo-Riemannian) metric. A Courant algebroid is a metrized vector bundle (A, ·, · ) over a manifold Q, equipped with a smooth bundle map a : A → T Q called the anchor, and a bilinear Courant bracket , such that the following axioms are satisfied, for all smooth sections σ 1 , σ 2 , σ 3 of A: Here a * : T * Q → A is the dual anchor composed with the isomorphism A * ∼ = A given by the metric. These axioms imply the following properties [37], for all f ∈ C ∞ (Q): [14] and Pelletier [30] (the latter reference discusses foliations defined by Banach-Lie algebroids). In our main applications the foliation will be explicitly given as the orbits of a Lie group action. (a) Suppose Q is a manifold with a closed 3-form η ∈ Ω 3 (Q). Then the direct sum T Q ⊕ T * Q carries the structure of a Courant algebroid, with metric v 1 + µ 1 , v 2 + µ 2 = µ 1 , v 2 + µ 2 , v 1 , with anchor the projection to the first summand, and with the Courant bracket We will denote this Courant algebroid by TQ η . If η = 0, it is called the standard Courant algebroid and is denoted TQ. Suppose E ⊆ TQ η is a Dirac structure. If [21]). This is called an action Courant algebroid. For any Lagrangian Lie subalgebra s ⊆ d, the subbundle E = Q × s is a Dirac structure in A. Weak Poisson structures. A Lagrangian subbundle E ⊆ TQ with the property E ⊕ T Q = TQ amounts to a continuous skew-symmetric bilinear form π on T * Q, such that E = gr(π ♯ ) is the graph of the associated map. If E is a Dirac structure with this property, we will call π (or E itself) a Poisson structure on Q. In particular, π determines a bracket on C ∞ (Q) in the usual way. For general Banach (as opposed to Hilbert) manifolds, the definition is more involved, see Odzijewicz-Ratiu [29]. Given a leaf O of a Poisson structure, the 2-form ω on that leaf is symplectic, in the strong sense that the bundle map Ω ♭ : T Q → T * Q is invertible. A Dirac structure E ⊆ TQ satisfying the weaker condition E ∩ T Q = 0 will be called a weak Poisson structure; this may be regarded as a family of skew-adjoint unbounded operators. The resulting 2-forms ω on leaves are only weakly symplectic, in the sense that Ω ♭ is injective. 
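Two of the structures just introduced admit standard explicit formulas; as a hedged reminder (sign conventions may differ from the authors'), the η-twisted Courant (Dorfman) bracket on TQ η = T Q ⊕ T * Q is

\[ [\![\, v_1 + \mu_1,\; v_2 + \mu_2 \,]\!] \;=\; [v_1, v_2] \;+\; L_{v_1}\mu_2 \;-\; \iota_{v_2}\, d\mu_1 \;+\; \iota_{v_2}\iota_{v_1}\eta , \]

and a weak Poisson structure E ⊆ TQ, in terms of the associated family of densely defined skew-adjoint operators D q , has fibers

\[ E_q \;=\; \{\, D_q\mu + \mu \;:\; \mu \in \mathrm{dom}(D_q) \subseteq T^*_q Q \,\}. \]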
In the finite-dimensional setting, the notions coincide. See Posthuma [31,Chapter 4.1] for another definition of weak Poisson structure. Given a weak Poisson structure E, let C ∞ E (Q) be the space of smooth functions f for which there exists a vector field v f with v f +df ∈ Γ(E). Since E∩T Q = 0, the vector field v f is uniquely determined. The elements of C ∞ E (Q) are called admissible [15] or Hamiltonian [1] functions, and v f the corresponding Hamiltonian vector field. The space of Hamiltonian functions is a Poisson algebra for the bracket is a smooth map Φ : Q 1 → Q 2 of the base manifolds, together with a Lagrangian subbundle gr(R) ⊆ A 2 × A 1 along the graph gr(Φ) ⊆ Q 2 × Q 1 , satisfying the following integrability condition: If two sections of A 2 × A 1 restrict to sections of gr(R), then so does their Courant bracket. We will depict Courant morphisms as follows Composition of Courant morphisms is defined as a composition of Lagrangian relations, assuming that the composition is transverse. As shown in [22], the integrability condition is preserved under composition. For x i ∈ A i , we will write x 1 ∼ R x 2 if (x 2 , x 1 ) ∈ gr(R). Similarly, if σ i ∈ Γ(A i ) are sections we write σ 1 ∼ R σ 2 if (σ 2 , σ 1 ) restricts to a section of gr(R). Consider the dual of the tangent map T Φ : T Q 1 → T Q 2 as a relation (5) T * Q 1 That is, the dual of a = (a 2 , a 1 ) restricts to a bundle map a * : gr(T * Φ) → gr(R). Proof. The assertion follows by dualizing the property T Φ • a 2 = a 1 • R, using that Let (A i , E i ), i = 1, 2 be Dirac structures on Q i . We say that (4) defines a Dirac morphism (or morphism of Manin pairs) [11] R : ( if for all m ∈ Q, every x 2 ∈ (E 2 ) Φ(m) is R-related to a unique element x 1 ∈ (E 1 ) m . Equivalently, Φ * E 2 = R • E 1 where the composition is weakly transverse (when the composition is transverse, the Dirac morphism is called strong). The resulting bundle map Φ * E 2 → E 1 defines a comorphism of Lie algebroids R : E 1 E 2 : It is compatible with the anchor, and the map on sections Φ * : Γ(E 2 ) → Γ(E 1 ) preserves Lie brackets. The base map Φ : M → Q is called the moment map. Given a Hamiltonian space, the resulting Lie algebroid comorphism T M E defines an action of the Lie algebroid E on the manifold M [11]. In particular, if E is the action Lie algebroid for a g-action on Q, then one obtains a g-action on M . Example 2.13. Let Q be a manifold with a weak Poisson structure (TQ, E), thus E∩T Q = 0, and let M be a Hamiltonian space, defined by a Dirac morphism R : (TM, T M ) (TQ, E). By Proposition 2.8, the backward image F = T Q •R is a Lagrangian subbundle with T M ∩ F = 0. We conclude that F is again a weak Poisson structure. The map Φ is anti-Poisson for these Poisson structures. [34] if the following sequence is exact: Exact Courant algebroids. A Courant algebroid A with base Q is called exact Equivalently, a * embeds T * Q as a Lagrangian subbundle, defining a Dirac structure (A, ran(a * )). Using the Hilbert structure, the Lagrangian subbundle ran(a * ) admits a closed complement, and by Proposition A.1 one can choose this complement to be Lagrangian. This determines a splitting j : T Q → A such that j(T Q) is a Lagrangian complement to a * (T * Q). We will refer to j as an isotropic splitting. 
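As a hedged summary of the exactness condition, and of the splitting formulas discussed below (assuming the standard conventions of [34]; signs may differ): exactness of A means that the sequence

\[ 0 \longrightarrow T^*Q \xrightarrow{\;a^*\;} A \xrightarrow{\;a\;} TQ \longrightarrow 0 \]

is exact, the closed 3-form determined by an isotropic splitting j is presumably

\[ \eta(v_1, v_2, v_3) \;=\; \big\langle\, [\![\, j(v_1),\, j(v_2) \,]\!],\; j(v_3) \,\big\rangle , \]

and the translation of j by a 2-form ϖ ∈ Ω 2 (Q) is j ′ (v) = j(v) + a * (ι v ϖ).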
As observed byŠevera [34], the choice of an isotropic splitting identifies A ∼ = TQ η , where the closed 3-form η ∈ Ω 3 (Q) is given by the formula The set of isotropic splittings is an affine space modeled on 2-forms: Given ̟ ∈ Ω 2 (Q), one obtains a new isotropic splitting by the translation with the corresponding 3-form η ′ = η + d̟. A Courant morphism R : A 1 A 2 between exact Courant algebroids will be called exact if the sequence is exact, where a = (a 2 , a 1 ). It turns out that it is enough to know exactness at gr(T Φ): Proposition 2.14. The following conditions are equivalent: [23], that is, a| gr(R) : gr(R) → gr(T Φ) is surjective. As a consequence of the fact that (8) is strongly Dirac, exact Courant morphisms can always be composed (see Proposition 2.9). Another consequence is that one can 'pull back' isotropic splittings: A 2 be an exact Courant morphism, and j 2 : T Q 2 → A 2 an isotropic splitting, with corresponding 3-form η 2 . Then there is a unique isotropic splitting j 1 : T Q 1 → A 1 such that Proof. The subbundle F 2 = j 2 (T Q 2 ) is a Lagrangian complement to ran(a * 2 ). Since (8) is strongly Dirac, Proposition 2.8 shows that its backward image F 1 = Φ * F 2 • R is a Lagrangian complement to ran(a * 1 ). Hence it is of the form F 1 = j 1 (T Q 1 ) for an isotropic splitting j 1 . By construction, this splitting satisfies R • j 1 = j 2 • T Φ. Uniqueness of the isotropic splitting j 1 with this property follows from ker(a 1 ) ∩ ker(R) = 0. Let η 1 be the corresponding 3-form. This is standard in the finite-dimensional case (see e.g. [19]), and the proof carries over to infinite dimensions. In one direction, the 2-form ω relates the splitting j 1 to the pullback of the splitting j 2 (see Proposition 2.15). In the other direction, Φ and ω determine an exact Courant morphism by the condition Since T Q ∩ E = 0, Proposition 2.8 shows that gr(ω) ∩ T M = 0. Equivalently, ker(ω) = 0. 2.6. The Cartan-Dirac structure. Of special interest in this paper is the Cartan-Dirac structure on a Lie group G. We describe here its definition as an action Courant algebroid; later we will show that the same Dirac structure arises by reduction from the Lie-Poisson structure on the space of connections. 2.6.1. Definition of the Cartan-Dirac structure. Let G be a Lie group. For X ∈ g we denote by X L , X R the corresponding left, right-invariant vector fields. The Maurer-Cartan forms on G will be denoted θ L , θ R ∈ Ω 1 (G, g) G ; thus ι(X L )θ L = X = ι(X R )θ R . Suppose G carries a bi-invariant pseudo-Riemannian metric, with corresponding Adinvariant metric (X 0 , X 1 ) → X 0 · X 1 on g. We denote by G the Lie group G with the opposite pseudo-Riemannian metric, and likewise by g the Lie algebra g with the opposite metric. Let D := G × G act on G by It has co-isotropic stabilizers, hence it defines an action Courant algebroid We refer to A as the Cartan-Courant algebroid. If s ⊆ d is any subspace, the subbundle is Lagrangian if and only if s is Lagrangian, and is involutive if and only if s is a Lie subalgebra. Thus, any Lagrangian Lie subalgebra s ⊆ d determines a Dirac structure. The Dirac structure E = E g ∆ ⊆ A defined by the diagonal g ∆ ⊆ d is called the Cartan-Dirac structure. Example 2.20. If κ : g → g is an orthogonal Lie algebra automorphism, then the graph gr(κ) = {(κ(X), X)| X ∈ g} is a Lagrangian Lie subalgebra. Hence it determines a Dirac structure E (κ) = E gr(κ) . If the metric on g is positive definite, then any Lagrangian Lie subalgebra s ⊆ d arises in this way. 
Indeed, any Lagrangian subspace is then given as the graph of an orthogonal transformation, and the condition that s is a Lie subalgebra means that this transformation preserves Lie brackets. 2.6.2. Splitting. The Cartan-Courant algebroid (11) is exact, with an isotropic splitting j : T G → A given at the group unit by the map g → d, X → 1 2 (−X, X). Equivalently, the map on sections j : for v ∈ Γ(T G). By direct calculation, one find that the resulting 3-form is the Cartan 3-form Let ̺ : A = G × (g ⊕ g) ∼ = TG η be the resulting isomorphism. On the level of sections, Taking X 0 = X 1 = X, we see that the Cartan Dirac structure is spanned by the sections X G + 1 2 (θ L + θ R ) · X for X ∈ g, where X G is the generating vector field for the conjugation action. 2.6.3. Hamiltonian spaces. Suppose s ⊆ d is a Lagrangian Lie subalgebra, defining a Dirac structure (A, E (s) ). The data of Hamiltonian space R : (TM, T M ) (A, E (s) ) for this Dirac structure gives, in particular, a Lie algebra action of s on M , such that Y M ∼ R ̺(Y ) for all Y ∈ s. If R is exact, one can use splittings to formulate these conditions in terms of differential forms. Indeed, Proposition 2.17 specializes to the following statement. For the special case that s is the diagonal, we recover the axioms of a q-Hamiltonian g-space as in [3]. If the action of s integrates to an action of a Lie group S, and if R is S-equivariant, we get a Hamiltonian S-space for (A, E (s) ): That is, ω is S-invariant and Φ is S-equivariant. For instance, the S-orbits in G are Hamiltonian S-spaces for (A, E (s) ). Other examples are obtained by 'fusion', as in [3]. Taking the direct product of this groupoid with the group G, we obtain a groupoid A ⇒ g. Since the groupoid multiplication covers the group multiplication of G, this is pictured as Let Mult A be the groupoid multiplication, defined on the subset of composable elements. Its graph gr(Mult A ) ⊆ A × A × A is a Dirac structure along the graph gr(Mult G ) of the group multiplication, defining a Courant morphism [2], Similarly, the groupoid inversion, Inv The Cartan Dirac structure (A, E) makes G into a Dirac Lie group, in the sense that the groupoid multiplication defines a morphism of Manin pairs, with underlying map the group multiplication [2,22]. More generally, suppose s 1 , s 2 ⊆ d are Lagrangian Lie subalgebras, and that the groupoid multiplication s 1 •s 2 is a transverse composition of linear relations. Then s 1 •s 2 is a Lagrangian Lie subalgebra, and gr(Mult A ) defines a morphism of Manin pairs In terms of the identification A ∼ = TG η defined by the splitting of A, the multiplication morphism is given by the pair ( where pr 1 , pr 2 : G × G → G are the two projections. As shown in [2], this 2-form is Reduction of Dirac structures In this section we continue to use the terms "manifold", "vector bundle", "Lie group", etc. to refer to Hilbert manifolds, Hilbert vector bundles, Hilbert Lie groups, and so on, unless otherwise specified. Actions on Courant algebroids. Let A be a Courant algebroid with base Q. A Courant derivation of A is a linear operatorṽ on Γ(A), together with a vector field for all σ, σ 1 , σ 2 ∈ Γ(A). These properties imply the property, for some σ ∈ Γ(A); we refer to σ as a generator of this Courant derivation. Note that the map A Courant automorphism of A is a vector bundle automorphism preserving the metric, the bracket, and compatible with the anchor. One can informally regard Der(A) as the Lie algebra of the group Aut(A) of Courant automorphisms. 
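A hedged restatement of the generator property, assuming the standard convention: a Courant derivation with generator σ ∈ Γ(A) acts on sections by

\[ \tilde v(\tau) \;=\; [\![\, \sigma,\, \tau \,]\!], \qquad \tau \in \Gamma(A), \]

with underlying vector field a(σ); such inner derivations are the infinitesimal counterparts of the Courant automorphisms just described.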
In particular, any 1-parameter group ⋉ Ω 2 (Q) consisting of pairs (Φ, ε) with Φ * η + dε = 0; the action of such a pair is given by the Courant morphism TΦ ε . (Since Φ is a diffeomorphism, this morphism is given by an actual vector bundle automorphism of A.) Observe that a(̺(ξ)) = ξ Q are the generating vector fields for an action on Q. We will use the same letter ̺ to denote the associated bundle map The dual map A → g * is sometimes referred to as a moment map for the Courant G action. The set of generators for a Courant G-action is either empty, or is an affine space modeled on the space of G-equivariant maps from g into the kernel of the map σ → [[σ, ·]]. For an exact Courant algebroid, this kernel is identified with the space of closed 1-forms. Example 3.4 (Lie-Poisson structure). Let G be a Lie group with Lie algebra g, let ξ g * ∈ Γ(T g * ) be the generating vector fields for the coadjoint action, and denote by dµ ∈ Ω 1 (g * , g * ) the tautological 1-form. Then the map ̺ : g → Γ(Tg * ), defines isotropic generators for the G-action on Tg * . The Dirac structure E ⊆ Tg * spanned by the sections ̺(ξ), ξ ∈ g is a Poisson structure, in the strong sense that Tg * = E ⊕ T g * . It is known as the Lie-Poisson structure on g * . Reduction of Dirac structures. Let We assume that the action is principal, i.e. that Q is a principal bundle with base manifold Q/G. Proof. The map ̺ is a continuous bundle map. Since the composition a • ̺ : Q × g → T Q is injective, with closed image, ̺ must also have a closed image. We will describe the reduction procedure for the case that the generators ̺(ξ) are isotropic; equivalently, ran(̺) = ̺(Q × g) is isotropic. Theorem 3.6 ([9]). Suppose the generators are isotropic; thus C = ̺(Q × g) ⊥ is coisotropic. Then A C = C/C ⊥ is a G-equivariant bundle, and the quotient bundle with the induced fiber metric, bracket and anchor map is a Courant algebroid. If E ⊆ A is a G-invariant Dirac structure and E + C is a closed subbundle, then the reduction and the reduced bundle This result was proved in [9] in the finite-dimensional setting, and for the case of exact Courant algebroids. However, the proof immediately carries over to the general case. A key observation is that the space Γ(C) G is closed under Courant bracket, containing Γ(C ⊥ ) G as a Courant ideal. Hence the Courant bracket descends to Γ(A red ) = Γ(C) G /Γ(C ⊥ ) G . Further, since a(C ⊥ ) ⊆ T Q lies in the G-orbit directions, we obtain a reduced anchor map a red : A red → T (Q/G). (a) The condition that E + C be closed is trivially satisfied if E ⊆ C, i.e. ran(̺) ⊆ E. One then has E red = (E/ ran(̺))/G, and a(E red ) ⊆ T (Q/G) is the image of a(E) ⊆ T Q under the quotient. (b) Suppose that the action of G on A extends to an action of a Lie group U ⊇ G, with U -equivariant generators ̺ : u → Γ(A) extending those of g. We assume that G is a normal subgroup of U , and that ̺(ξ), ̺(ζ) = 0 for all ξ ∈ g, ζ ∈ u. Then the G-reduced Courant algebroid A red inherits an action of the quotient group U/G, with generators ̺ red : hence ker(a) ⊥ + C is again closed. If A is exact, so that ker(a) = ran(a * ) is Lagrangian, this shows that ran(a * ) + C = A, or equivalently a(C) = T Q. Using these facts, we see that A red is exact, and the Courant morphism q is exact as well. Furthermore, q : (A, ran(a * )) (A red , ran(a * red )). is a strong Dirac morphism. Reduction of Dirac morphisms. The morphism R red has the property is a closed Lagrangian subbundle. 
Since its space of sections is generated by Γ( gr(R)) G ∼ = Γ(gr(R)) (G 1 ) ∆ , it is also involutive. Hence it is a Dirac structure along gr(Φ). Along gr(Φ), the sum gr(R) + ran(̺)| gr(Φ) is a closed subbundle, for the following reason: since ̺ 1 (ξ) ∼ R ̺ 2 (f (ξ)) for all ξ ∈ g 1 , it coincides with the direct sum of closed subbundles R ⊕ ran(̺ 2 ), and this is mapped by the anchor to the closed subbundle T gr(Φ) = T gr(Φ) ⊕ ran(a • ρ 2 ) in a way which preserves the direct sum decomposition and is injective on the second factor. As a result, its flow-out gr(R) + ran(̺)| gr(Φ) under the action of G is also closed. It follows that gr(R) C = ( gr(R) ∩ C)/( gr(R) ∩ C ⊥ ) is a Lagrangian subbundle of A C along gr(Φ) and hence that gr(R) red = gr(R) C /G, is a Lagrangian subbundle of A red = A C /G along the graph of Φ red . To check integrability of gr(R) red , it is enough to argue locally. Let σ, σ ′ be sections of A red defined near (π 2 (Φ(m)), π 1 (m)), and restricting to sections of gr(R) red . Using local triviality of the principal bundle, these lift to G-invariant sectionsσ,σ ′ of C ⊆ A, defined near (Φ(m), m), and restricting to sections of gr(R). The Courant bracket [[σ,σ ′ ]] has the same property, by integrability of both C and gr(R). Hence [[σ, σ ′ ]] descends to a section of gr(R) red . (b) Let x 1 ∈ A 1 and y 2 ∈ (A 2 ) red ; we must show that x 1 ∼ R x 2 ∼ q 2 y 2 for some x 2 ∈ A 2 if and only if x 1 ∼ q 1 y 1 ∼ R red y 2 for some y 1 ∈ (A 1 ) red . Given the latter property, since y 1 ∼ R red y 2 the definition of R red givesx 1 ∼ Rx2 for somex i ∈ A i withx i ∼ q i y i . The differencex 1 − x 1 is q 1 -related to 0, hence it is of the form ̺ 1 (ξ 1 )| m for some ξ 1 ∈ g 1 . Put x 2 =x 2 − ̺ 2 (f (ξ 1 )). Then x 1 ∼ R x 2 ∼ q 2 y 2 as desired. Conversely, given x 2 with this property, so that x 2 ∈ C 2 , the condition x 1 ∼ R x 2 implies that for all ξ 1 ∈ g 1 , x 1 , ̺ 1 (ξ 1 ) = x 2 , ̺ 2 (f (ξ 1 )) = 0. Hence x 1 ∈ C 1 , which determines an element y 1 with x 1 ∼ q 1 y 1 . By definition of R red , the property x 1 ∼ R x 2 descends to y 1 ∼ R red y 2 . (c) Given R, we show how to express R in terms of R red . Let p i : C i → A i be the quotient maps, and p = p 2 × p 1 . The pre-image p −1 (gr(R red )) is a Lagrangian subbundle along π −1 (gr(Φ red )) = gr(Φ). Its intersection with a −1 (gr(T Φ)) is contained in gr(R). Since A is an exact Courant algebroid, D = a −1 (gr(T Φ)) is a closed coisotropic subbundle; the orthogonal bundle is D ⊥ = a * (gr(T * Φ)). Note that p −1 (gr(R red ))| gr(Φ) + D = ran(ρ)| gr(Φ) + D is closed, by the same reasoning as in part (a). Reducing p −1 (gr(R red ))| gr(Φ) with respect to D, and then taking the inverse image under the quotient map D → D/D ⊥ , we obtain a Lagrangian subbundle along gr(Φ). Since both summands lie in gr(R), the sum (20) is in fact equal to gr(R). Conversely, if the Courant morphism R red is given, we can take (20) as the definition of R. This R is (G 1 ) ∆ -equivariant and intertwines the generators, and an argument similar to (a) shows that it is integrable. For the rest of this Section we shall focus on the case in which there a single group G acting on both A i and equivariance holds with respect to the identity map. . Suppose also that for all m ∈ Q 1 , ξ ∈ g, Then R red defines a Dirac morphism Proof. We have to show that every y 2 ∈ Φ * red (E 2 ) red is R red -related to a unique element y 1 ∈ (E 1 ) red . Let x 2 ∈ E 2 ∩ C 2 be a lift of y 2 . 
Since R is a Dirac morphism, there exists This element satisfies x 1 , ̺ 1 (ξ) = x 2 , ̺ 2 (ξ) = 0 for all ξ ∈ g, hence x 1 ∈ C 1 . Letting y 1 ∈ (E 1 ) red be the image, we get y 1 ∼ R red y 2 . For uniqueness, suppose y 1 ∈ (E 1 ) red satisfies y 1 ∼ R red 0. Choose elements x i ∈ C i ∩ E i with x 1 ∼ R x 2 , x 1 ∼ q 1 y 1 and x 2 ∼ q 2 0. The last condition gives x 2 = ̺ 2 (ξ) Φ(m) for some ξ ∈ g. Then x 1 ∼ R x 2 but also ̺ 1 (ξ) ∼ R x 2 . By assumption (21), and since R is a Dirac morphism, this implies x 1 = ̺ 1 (ξ). Hence y 1 = 0. Suppose that the composition of R with E 1 is transverse, that is, ran * (R) + E 1 = A 1 . We want to prove ran * (R red ) + (E 1 ) red = (A 1 ) red . Given v 1 ∈ (A 1 ) red , let u 1 ∈ C 1 be a preimage. Write u 1 = x 1 + a 1 with x 1 ∈ E 1 and a 1 ∈ ran * (R). Then a 1 ∼ R a 2 for some a 2 ∈ A 2 . By assumption, for all w 2 ∈ E 2 ∩ ran(̺ 2 ) there exists w 1 ∈ E 1 ∩ ran(̺ 1 ) with w 1 ∼ R w 2 . Therefore, a 2 , w 2 = a 1 , w 1 = 0 for all w 2 ∈ E 2 ∩ ran(̺ 2 ), which proves a 2 ∈ E 2 + C 2 . Modifying the element x 1 , we may arrange that the E 2 -component of a 2 is zero. Hence a 2 ∈ C 2 descends to an element b 2 ∈ (A 2 ) red . Using part (b) from Theorem 3.8, the property a 1 ∼ R a 2 ∼ q 2 b 2 shows the existence of an element b 1 with a 1 ∼ q 1 b 1 ∼ R red b 2 . In particular, b 1 ∈ ran * (R red ). It also follows that a 1 ∈ C 1 , and hence Reduction of exact Courant algebroids. Suppose The following result describes isotropic generators for the action in terms of the splitting. Recall that the Cartan complex of equivariant differential forms on Q is the space of G-equivariant polynomial maps β : g → Ω(Q), with the equivariant differential Proposition 3.11. Changing the splitting by an invariant 2-form 12. If the generators are not necessarily isotropic, one finds instead that d G η G (ξ) = 1 2 ̺(ξ), ̺(ξ) . We now make the additional assumption that the G-action on Q is principal, as in Theorem 3.6, with quotient map π : Q → Q/G. Suppose isotropic generators ̺ : g → Γ(A) are given. Definition 3.13. An isotropic splitting j : T Q → A is called g-horizontal if ̺(Q × g) ⊆ j(T Q), or equivalently ̺(ξ) = j(ξ Q ) for all ξ ∈ g. It is called G-basic if it is both G-invariant and g-horizontal. Thus, an invariant isotropic splitting is G-basic if and only if α = 0. There is a 1-1 correspondence between G-basic splittings j of A and isotropic splittings j red of A red . Under this correspondence, j(T Q) is the pre-image of j red (T (Q/G)) under the quotient map. The three-form of a G-basic splitting j coincides with its equivariant extension, and equals the pullback of the three-form of the reduced splitting j red : Proposition 3.14. Let Q → Q/G be a principal G-bundle with connection θ ∈ Ω 1 (Q, g) G . Let A → Q be a G-equivariant Courant algebroid with isotropic generators, and let j : T Q → A be a G-invariant isotropic splitting. Put where α is given by (23), and c(ξ, ξ ′ ) = ι(ξ Q )α(ξ ′ ) ∈ C ∞ (Q). Twisting the splitting j by ̟, we obtain a G-basic splitting. The resulting 3-form on Q/G satisfies Proof. The ̟-twisted splitting j ′ is given by (7), and the corresponding 1-forms Thus α ′ (ξ) = 0. Remark 3.15. Note that if the splitting j was G-basic to begin with, then ̟ = 0, and hence j ′ = j, for any choice of connection θ. Proof. In the exact case, R defines a strong Dirac morphism (A 1 , ran(a * 1 )) (A 2 , ran(a * 2 )). 
We have ran(a * i ) red = ran((a * i ) red ), hence by Proposition 3.9 applied to E i = ran(a * i ) we obtain a strong Dirac morphism R red : ((A 1 ) red , (ran(a * 1 )) red ) ((A 2 ) red , (ran(a * 2 )) red ). In turn, this means that R red is exact. We now describe the reduction of exact Courant morphisms in terms of isotropic splittings. Suppose A i → Q i for i = 1, 2 are G-equivariant exact Courant algebroids, with isotropic generators ̺ i : g → Γ(A i ). Let j i : T Q i → A i be G-equivariant isotropic splittings, identifying A i = TQ i,η i for closed 3-forms η i . If the G-actions on Q i are principal actions, and θ i ∈ Ω 1 (Q i , g) are connection 1-forms, defining 2-forms ̟ i ∈ Ω 2 (Q i ) as in (24) and 3-forms η i,red as in (25), then the reduced Courant morphism is where Φ red : Q 1 /G → Q 2 /G is the map induced by Φ, and ω red is given by Proof. By definition, the 2-form ω relates the splitting j 1 with the 'pullback' of the splitting j 2 . Hence, (26) follows from Proposition 3.11. Suppose now that the G-actions are principal. Given connection 1-forms θ i and the corresponding 2-forms ̟ i , let j ′ i be the G-basic splittings obtained by twisting j i by the 2-forms ̟ i . The 3-forms change to η ′ i = η i + d̟ i , which are G-basic and in particular coincide with their equivariant extensions: η ′ i,G = η ′ i . The 2-form describing R = TΦ ω relative to the new splitting is which shows in particular that ω ′ is G-basic. The resulting 2-form ω red with π * 1 ω red = ω ′ describes the exact morphism R red . The Hilbert principal bundle of connections Let G be a connected finite-dimensional Lie group. The holonomy fibration is defined to be the space of connections A I on the trivial G-bundle over the interval I = [0, 1]. By imposing appropriate regularity conditions, the space A I is a principal bundle for the Hilbert Lie group of gauge transformations which are trivial at the boundary ∂I. The principal bundle projection is the map to G given by the holonomy along the interval. A slight modification of this holonomy fibration, which makes contact with the usual theory of loop groups, is studied in Section 6; there we consider connections on the trivial Gbundle over the circle instead of the interval. Also, in our study of the geometry of these fibrations it will be useful to choose principal connections, which may be done via the Caloron correspondence, reviewed in Appendix C. 4.1. Sobolev space notation. We use the following basic properties of Sobolev spaces (see e.g. [7,Section 11] defines a Hilbert Lie group, with Lie algebra g I = Ω 0 H r+1 (I, g). This group acts smoothly by gauge transformations (27) g for g ∈ G I and A ∈ A I . Here θ R ∈ Ω 1 (G, g) is the right-invariant Maurer-Cartan form on G. Note that g is taken to have Sobolev class r + 1 because the involvement of derivatives implies that g * θ R has class r. Given ξ ∈ g I , the corresponding generating vector field ξ A I is given by Hr (I, g) is the exterior covariant derivative associated to A ∈ A I . The action (27) is transitive: given A ∈ A I , the equation is a first order ordinary differential equation for g ∈ G I , and so has a unique solution once an initial condition g(0) is chosen. Furthermore, this solution lies in H r+1 by standard elliptic theory, as required. We define the holonomy map Hol : A I → G in terms of the commutative diagram where the left vertical map is given by g → (g(0), g(1)) and the lower horizontal map is (a 0 , a 1 ) → a −1 0 a 1 . 
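Concretely, and consistently with the conventions used below (Hol s (A) = g(s) with g(0) = e and A = g * θ L ), the holonomy of a connection A = a(t) dt ∈ A I may be computed by solving the parallel transport equation

\[ g(0) = e, \qquad g(t)^{-1}\,\dot g(t) \;=\; a(t), \qquad \mathrm{Hol}(A) \;=\; g(1), \]

which matches the two descriptions of Hol given by the commutative diagram above.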
Both horizontal maps may be seen as quotient maps for a principal Gaction, given by multiplication from the left. All maps in the diagram are G I -equivariant, where G I acts on itself by g → k.g, (k.g)(t) = g(t)k(t) −1 , on G × G by (a 0 , a 1 ) → (a 0 k(0) −1 , a 1 k(1) −1 ), and on G by a → k(0)ak(1) −1 . In particular, (31) Hol(k · A) = k(0) Hol(A) k(1) −1 , for k ∈ G I . The map Hol may be regarded as the quotient map for the principal action of the subgroup (32) G I,∂I = {g ∈ G I : g(0) = g(1) = e}. By taking the differential of (31), we see that the differential T Hol : for ξ ∈ g I . 4.3. Principal connections for the holonomy fibration. Any function χ ∈ C ∞ (I) with χ(0) = 0 and χ(1) = 1 defines a connection θ on the principal bundle A I → G. The connection can be described in terms of the corresponding horizontal lift. Let g ∈ G I be any path such that A = g −1 · 0. Using left-trivialization T G = G × g, the horizontal lift for θ is given as Note that this does not depend on the choice of g with g −1 · 0 = A. In Appendix C, we review the 'conceptual construction' of θ, provided by the caloron correspondence. The horizontal bundle defined by θ is invariant under the full action of G I (not only of the structure group G I,∂I of the principal bundle). In particular, one can take χ(t) = t. The resulting connection θ is uniquely characterized by G I -invariance together with the value at the zero connection A = 0, given by That is, the horizontal space at A = 0 is g ⊆ Ω 1 (I, g), embedded as 'constant 1-forms'. Remark 4.1. Suppose r = 0, so that A I consists of L 2 -connections, and suppose that g comes equipped with an Ad-invariant metric (as in Section 5 below). Define a G Iinvariant pseudo-Riemannian metric on A I via Then the connection θ defined by χ(t) = t is the unique connection for which the horizontal spaces are orthogonal to the G I,∂I -orbits for the metric (35). (This is easily verified at A = 0; the claim follows by invariance.) Dirac reduction for the holonomy fibration Let G be a connected finite-dimensional Lie group with a bi-invariant pseudo-Riemannian metric, so that its Lie algebra g is a metrized Lie algebra, that is, it comes with a non-degenerate Ad-invariant symmetric bilinear form, denoted by (X 0 , X 1 ) → X 0 · X 1 . We use the metric to define a G I -invariant "Lie-Poisson" structure on A I , a weak Poisson structure given by an invariant Dirac structure in the standard Courant algebroid TA I . We then explain how to carry out a reduction along the holonomy map Hol : A I → G, obtaining the Cartan-Dirac structure of Section 2.6. We also consider more general weak Poisson structures on A I , which reduce to other Dirac structures on G. Finally, we study the reduction of Hamiltonian spaces for these weak Poisson structures. where ξ A I are the generating vector fields for the G I -action on A I , and the 1-form component is such that Note that this is similar to the formula for the sections spanning the Lie-Poisson structure on g * (cf. Equation (18)). For any subspace s ⊆ g ⊕ g, let g (s) (1)) ∈ s} be the subspace of paths with end points in s. Let E (s) ⊆ TA I denote the subbundle spanned by all ̺(ξ), ξ ∈ g for all ξ, ζ ∈ g I . Furthermore, for any subspace s ⊆ g ⊕ g, one has Proof. The map g I → Ω 1 (A I ), ξ → dA, ξ is G I -equivariant and takes values in closed 1-forms. Since the ξ A I (viewed as sections of TA I ) are generators for the action, so are ξ A I + dA, ξ . In particular, for all g ∈ G I , ξ ∈ g I . 
Furthermore, which proves (37) by polarization. This also gives the reverse inclusion in (38). For the forward inclusion, we first show that (40) ̺(A I × g I,∂I ) ⊥ ⊆ ̺(A I × g I ). We may use the G I -invariance to assume A = 0. Suppose b + β ∈ T 0 A I is orthogonal to ̺(A I × g I,∂I ). That is, for all ξ ∈ g I,∂I , Let ζ be a solution of ∂ζ = b, with the unique initial condition ζ(0) such that for all X ∈ g, By elliptic regularity, ζ has Sobolev class r + 1, so that ζ ∈ g I . We will show that β = dA, ζ , which then proves that b + β = ̺(ζ)| 0 . Consider the decomposition of T 0 A I into horizontal and vertical directions, relative to the standard connection θ given by (34). The vertical space is spanned by elements ξ A I = ∂ξ with ξ ∈ g I,∂I , and we have The horizontal space is spanned by elements of the form X dt with X ∈ g, and on such elements we have for A ∈ O and ξ 1 , ξ 2 ∈ g (s) I . Proof. From the formula ̺(ξ) = ξ A I + dA, ξ , it is immediate that E (s) ∩ T A I = 0, and that the orbits of E (s) are the orbits of the g (s) I -action. By definition, the 2-forms on these orbits satisfy Hence for all ξ 1 , ξ 2 ∈ g (s) I . The case of the diagonal s = g ∆ (periodic boundary conditions) is particularly important. The Dirac structure E ≡ E (g ∆ ) is called the Lie-Poisson structure on A I . Therefore, they define a central extension of the Lie algebra g Proof. This is a special case of Proposition 2.17 together with Example 2.19. 5.3. Reduction of the Lie-Poisson structure on the space of connections. In this section, we exhibit the Cartan-Dirac structure from Section 2.6 as a reduction of the Lie-Poisson structure on the space of connections over the unit interval. In Section 6, we give a similar construction for connections over the circle S 1 . Since the standard lift of the principal G I,∂I -action on A I to TA I has isotropic generators, we use the machinery of Section 3.2 to define a reduced Courant algebroid (TA I ) red over G = A I /G I,∂I . Proof. The map G I → G × G, k → (k(0), k(1)) descends to an identification G I /G I,∂I = G × G, g I /g I,∂I = g ⊕ g. Let C = ̺(A I × g I ), thus C ⊥ = ̺(A I × g I,∂I ) by (38). By definition, (TA I ) red = (C/C ⊥ )/G I,∂I . Since the action of G I,∂I on g I /g I,∂I is trivial, it follows that (TA I ) red is an action Courant algebroid with the constant sections as the reduced generators ̺ red : g I /g I,∂I → Γ(TA I ) red . The action of [k] ∈ G I /G I,∂I on G is induced from the action of k ∈ G I on A I , and is given by [k].a = k(0)ak(1) −1 , by the equivariance property (31) of the holonomy map. This shows that (TA I ) red is the Cartan-Courant algebroid A, where the isomorphism intertwines the actions and the generators. Since g (s) We now verify the reduction of splittings. As in Section 3.4, let ̟ ∈ Ω 2 (A I ) be the 2-form determined by the principal connection θ. In the notation from that section, for ξ, ξ ′ ∈ g I,∂I , hence defining the G I,∂I -basic splitting j : T A I → TA I via j(a) = a + ι(a)̟. Let j red : T G → (TA I ) red the reduced splitting. To compute it, let β : g → Ω 1 (G) be the map given as (43) ̺ red (0, X) − j red (X L ) = a * red (β(X)) for all X ∈ g, with a red : (TA I ) red = A → T G the reduced anchor. Then for all ξ ∈ g I with ξ(0) = 0. We use (44) to compute the map β, which then determines j red via (43). Let θ be obtained from the function χ ∈ C ∞ (I) with χ(0) = 0, χ(1) = 1, as in Section 4.3. 
Given X, Z ∈ g, let Then ξ, ζ are the unique paths from 0 to X, Z such that ξ A I | A , ζ A | A are horizontal with respect to θ| A . With this choice of ξ, we obtain Since ζ A ∼ Hol Z L , the left hand side can also be written Hol * ι(Z L )β(X). We conclude β(X) = 1 2 X · θ L , and hence j red (X L ) = ̺ red (0, X) − 1 2 a * red θ L · X. This is consistent with the formulas (12) for the Cartan-Courant algebroid, proving that the two splittings coincide. (a) The above theorem holds for all regularities r ≥ 0 imposed on the connections A I . It thus shows that the reduction (TA I ) red is insensitive to the chosen regularity r ≥ 0. (b) As shown in [4], the 2-form ̟ ∈ Ω 2 (A I ) G I determined by the standard connection θ on the holonomy fibration is given by the formula where θ R ∈ Ω 1 (G, g) is the right invariant Maurer-Cartan 1-form on G, and Hol s : A I → G is given by Hol s (A) = g(s), where g ∈ G I is the parallel transport for A, i.e. g(0) = e and A = g * θ L . This 2-form ̟ also appears in [3, Section 8.1]. (c) The (G× G)-equivariant splittings of the Cartan-Courant algebroid form an affine space for the vector space of bi-invariant 2-forms on the base G. If G is compact or semi-simple, then the space Ω 2 (G) G×G = (∧ 2 g * ) G is zero. Hence, in this case any G I -invariant connection 1-form θ on A I will lead to the same 2-form ̟, and to the same reduced splitting of (TA I ) red = A. Note that if the moment map Ψ is proper, then so is Φ. In this case, the finitedimensionality of G implies finite-dimensionality of M . Recall that the exact Hamiltonian spaces for (TA I , E (s) ) are described by triples (M, σ, Ψ) (see Proposition 5.5), while those for (A, E (s) ) are described by triples (M, ω, Φ) (see Proposition 2.21). Under the correspondence from Proposition 5.8, these are related as follows. Let ̟ ∈ Ω 2 (A I ) be the G I -invariant 2-form defined by the standard connection θ on the holonomy fibration. Proof. In terms of the splittings, we have R = TΨ σ and R = TΦ ω , for an S-invariant 2-form σ ∈ Ω 2 (M) and an S-invariant 2-form ω ∈ Ω 2 (M ). Since the ̟-twist of the standard splitting of TA I descends to the splitting (12) of A, these 2-forms are related by (46). Multiplicative structures. In this subsection, we obtain the multiplicative structures Mult A and Inv A on the Cartan-Courant algebroid A described in Section 2.6.4 as a reduction from appropriate spaces of connections. We begin describing how to get group multiplication Mult G : G × G → G in terms of spaces of connections. Let M denote the space of flat G-connections of class H k on the trivial principal G-bundle over a triangle T ⊆ R 2 (i.e. a 2-simplex), with k > 1. Following [3, Section 9.1], M is a smooth infinite dimensional Hilbert manifold on which the Hilbert Lie group G T = M ap H k+1 (T , G) acts by gauge transformations. Let z 0 , z 1 , z 2 ∈ ∂T be the cyclically oriented vertices of the 2-simplex. (∂T is taken positively oriented w.r.t. T .) We thus define a map ∂T is an orientation preserving parameterization of the edge [z i , z i+1 ] ⊆ ∂T , for i = 0, 1, 2 (z 3 = z 0 ), and we denotedγ(t) = γ(1 − t). Here, we take A I with regularity r = k − 1/2 so that Φ is smooth because k > 1. If we consider the subgroup G T ,Z = {g ∈ G T : g(z i ) = e} acting on M and G I,∂I × G I,∂I × G I,∂I acting on (A I ) 3 , the map Φ is equivariant relative to the group homomorphism f : g → (γ * 2 g, γ * 0 g, γ * 1 g). The induced map since the holonomy around ∂T of a flat connection on T is trivial. 
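To spell out why the induced map takes values in the graph of the group multiplication (a hedged reconstruction; orientation conventions may differ): for a flat connection A ∈ M, the holonomy around ∂T is trivial, so

\[ \mathrm{Hol}(\gamma_0^* A)\,\mathrm{Hol}(\gamma_1^* A)\,\mathrm{Hol}(\gamma_2^* A) \;=\; e, \qquad\text{hence}\qquad \mathrm{Hol}(\bar\gamma_2^* A) \;=\; \mathrm{Hol}(\gamma_0^* A)\,\mathrm{Hol}(\gamma_1^* A), \]

so that the induced map lands in the graph {(gh, g, h)} of Mult G : G × G → G.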
At the level of Courant algebroids, the map Φ can be supplemented with the Atiyah-Bott presymplectic 2-form σ ∈ Ω 2 (M) ( [5]). It has the following property (see e.g. [3, Section 9.1]), for ξ ∈ g T = Ω 0 H k+1 (T , g) inducing the infinitesimal gauge transformation Proof. We shall denote q M : TM TM and q A : TA I A the quotient relations and R = (TΦ σ ) red . (Recall that ̺(ξ A | A ) ∼ q A (Hol(A), ξ(0) ⊕ ξ(1)).) It is clear that The induced exact Courant morphism Since the r.h.s. in the Proposition is Lagrangian, we only need to show that R • T M is included in this set. This, in turn, follows from the fact that, given g, h ∈ G and X i ∈ g, i = 0, 1, 2, one can find A ∈ M and ξ ∈ g T so that [A] ≃ (gh, g, h) and ξ(z i ) = X i . For, then, using eq. (48), Remark 5.11. The basic splitting of TA I given in Thm. 5.6 can be used to induce a splitting of T(A I ) 3 which is basic for ̺ ×̺ ×̺. Following Prop. 3.18, the reduced splitting takes the reduced exact Courant morphism (TΦ σ ) red to the form TΦ red,σ red for an induced 2-form σ red ∈ Ω 2 (M red ). Using the identification M red ≃ G 2 , [A] → (Hol(γ * 0 A), Hol(γ * 1 A)), a straighforward computation shows that σ red = ς ∈ Ω 2 (G × G), the 2-form introduced in eq. (17). Finally, we describe the inversion morphism Inv A as a reduction. The diffeomorphism Inv I : A I → A I , a(s)ds → −a(1 − s)ds is G I,∂I -equivariant with respect to the group homomorphism g(s) → g(1 − s) and covers the group inversion Inv G : G → G along the holonomy fibration Hol : A I → G. Moreover, the natural lift Inv A : TA I → TA I of Inv I is equivariant for the actions ̺ and̺, respectively. Recalling the definition of the quotient relation q A : TA I A as in the Proof above, the corresponding reduced morphism (Inv A ) red : Then (Inv A ) red = Inv A coincides with inversion in the groupoid G × d as described in Section 2.6.4. Connections over S 1 In the previous section, we obtained the Cartan-Courant algebroid on G, together with its Cartan-Dirac structure, by reduction along the principal G I -bundle Hol : A I → G for connections on a unit interval. For applications to moduli spaces of flat connections over surfaces with boundary, one is interested in a modification of this construction using the space of connections over a circle, denoted by A S 1 . In this case, the group acting on A S 1 is the loop group LG and, unlike the G I -action on A I , this action is not transitive. In section 6.1, we describe an L 0 G-bundle Hol : A S 1 → G corresponding to the quotient by the based loop group L 0 G and introduce a transitive Lie algebroid R over A S 1 . Here, L 0 G plays the role of G I,∂I and R that of the transitive g I -action on A I . In section 6.2, we introduce an LG-action on the standard Courant algebroid TA S 1 and a weak Poisson structure E analogous to the Lie-Poisson structure on A I . Finally, we show that reduction of (TA S 1 , E) under the L 0 G-action also yields the Cartan-Dirac structure (A, E). 6.1. The holonomy fibration for the circle. Let A S 1 = Ω 1 Hr (S 1 , g) be the space of connections on the trivial G-bundle over the circle S 1 = R/Z. Let be the loop group; the subgroup L 0 G of loops with γ(0) = e is the based loop group. We then define the path space (see Appendix C for its relation to the caloron correspondence, which also makes it clear that it is a Hilbert manifold) The loop group LG acts on PG by (k · g)(t) = g(t)k(t) −1 . This action is a principal action, with quotient map g → g(1)g(0) −1 . 
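A hedged guess at the path space left implicit above, consistent with the maps used below (the map q(g) = (g(0), g(1)), the LG-action, and the quotient map g → g(1)g(0) −1 ):

\[ PG \;=\; \{\, g \in H^{r+1}_{\mathrm{loc}}(\mathbb{R}, G) \;:\; g(t+1)\,g(t)^{-1}\ \text{is independent of}\ t \,\}, \]

the quasi-periodic paths. For such g, the LG-action (k · g)(t) = g(t)k(t) −1 preserves quasi-periodicity, since k is periodic, and g(1)g(0) −1 is precisely the quasi-period.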
The principal action commutes with the G-action on PG by pointwise multiplication from the left; this action makes PG into an LG-equivariant principal G-bundle over A S 1 , with quotient map g → A = g −1 · 0. The holonomy Hol : A S 1 → G of a connection may be defined in terms of the commutative diagram (51) where the left vertical map is given by q : g → (g(0), g (1)) and the lower horizontal map is (a 0 , a 1 ) → a −1 0 a 1 . The holonomy map has the equivariance property Hol(k.A) = Ad k(0) Hol(A) for k ∈ LG and A ∈ A S 1 . The generating vector fields for the action of Lg = Ω 0 H r+1 (S 1 , g) are again given by the covariant derivatives, the differential of Hol maps these to the generators for the conjugation action. We denote by π : PG → G, g → g(0) −1 g (1) the map defined by the commutative diagram; it is the quotient map for the G × L 0 Gaction (not to be confused with the quotient map for the LG-action). Lemma 6.1. The tangent fiber to PG at g has the following description The action T g PG → T gh −1 PG of elements h ∈ LG is given by ξ → Ad h ξ, while the action T g PG → T ag PG of elements a ∈ G is ξ → ξ. In term of this identification (53), and using left trivialization T G = G × g, the tangent map to the left vertical map in (51) is given by T g q : T g PG → g ⊕ g, ξ → q(ξ) := (ξ(0), ξ(1)). Proof. The tangent bundle of PG can itself be regarded as the total space of the path fibration for the tangent group T G: Using left trivialization to identify T G = G × g, the group structure reads as (a 1 , X 1 )(a 2 , X 2 ) = (a 1 a 2 , Ad a −1 1 X 1 + X 2 ), and (a, X) −1 = (a −1 , − Ad a X). Hence, the condition for a path t → (g(t), ξ(t)) to define an element of P(T G) is that be constant as a function of t. The last claim follows since the tangent map to q : PG → G × G is the corresponding map for P(T G) → T G × T G for the group T G. Regard PG as an LG-equivariant principal G-bundle over A S 1 , and let be the corresponding LG-equivariant Lie algebroid. Proposition 6.2. The fibers of the Lie algebroid R have the following description, The Lie bracket on sections of R is given by here L(a)ξ denotes the Lie derivative of the function ξ with respect to the vector field a, and [ξ 1 , ξ 2 ] is the pointwise Lie bracket. Proof. The subspace on the right hand side of (53) depends only on A = g −1 · 0; equation (54) gives a direct description in terms of A. (Recall that ∂ A = Ad g −1 • ∂ • Ad g .) The expression for the Lie bracket follows from a similar formula for the bracket on sections of T (PG). 6.2. Reduction by the L 0 G-action. The lift of the LG-action on A S 1 to the standard Courant algebroid TA S 1 has isotropic generators ̺ : Lg → Γ(TA S 1 ) given by the same formulas as for A I : By (52), the fiber of E = ̺(A S 1 × Lg) at A ∈ A S 1 may be regarded as the graph of the skew-adjoint operator ∂ A : Ω 0 (S 1 , g) → Ω 1 (S 1 , g). In particular, E is a Lagrangian subbundle, and since it is involutive it is a Dirac structure E ⊆ TA S 1 . Indeed, E is a weak Poisson structure, which we will again refer to as a Lie-Poisson structure on A S 1 . To describe its reduction with respect to the based loop group L 0 G, we extend (56) to sections of the Lie algebroid R: here we denote by ξ| I ∈ Ω 0 H −r (S 1 , g) the restriction to I ⊆ R, regarded as a piecewise continuous function on S 1 (with a jump singularity at 0) and by dA, ξ| I the corresponding element of T * A A S 1 . Lemma 6.3. 
For ξ 1 , ξ 2 ∈ Γ(R), the pairing of the corresponding sections is given by while the Courant bracket is Proof. We will write ξ ♯ = ξ A for the vector field defined by ξ ∈ Γ(R). The formula for the pairing follows from But d∂ A ξ 1 , ξ 2 = ∂ A dξ 1 , ξ 2 + [dA, ξ 1 ], ξ 2 . The first term combines with ∂ A ξ 2 , dξ 1 to give I ∂(ξ 2 · dξ 1 ), while the second term combines with dA, We are now in position to compute the reduction of the Lie-Poisson structure E ⊆ TA S 1 by the action of L 0 G. By definition, the reduced Courant algebroid is (C/C ⊥ )/L 0 G, where C is the coisotropic subbundle with fibers C A = (̺(L 0 g) A ) ⊥ . Theorem 6.4 (Reduction of the weak Poisson structure on A S 1 ). The reduction of the Dirac structure (TA S 1 , E) under the action of the based loop group L 0 G is canonically isomorphic to the Cartan-Dirac structure (A, E). In more detail, C is spanned by sections ̺(ξ) with ξ ∈ Γ(R), and the map descends to an isomorphism of Courant algebroids (TA S 1 ) red → A. Proof. An element a + dA, u with a ∈ T A A S 1 = Ω 1 Hr (S 1 , g) and u ∈ Ω 0 H −r (S 1 , g), lies in Equivalently, a − ∂ A u is a multiple of the δ-distribution supported at 0. In particular, u is given by a continuous function on I (regarded as a piecewise continuous function on S 1 with a jump discontinuity at 0). Given a ∈ T A A S 1 = Ω 1 Hr (S 1 , g), we can determine the corresponding u by integration. Furthermore, by lifting the differential equation to R, we see that u is the restriction to I of a function ξ ∈ Ω 0 H r+1 (R, g) satisfying ∂ A ξ = a (where A, a are regarded as periodic forms on R). In particular, ∂ A ξ is periodic, that is, ξ ∈ R A . This gives the desired identification of R A → C A , ξ → ̺(ξ) A . Since the kernel of the map R A → g ⊕ g, ξ → (Hol(A), ξ(0), ξ(1)) is exactly L 0 g, it follows that (TA S 1 ) red = G × (g ⊕ g) as a vector bundle. The metric and Courant bracket on (TA S 1 ) red are induced from the metric and Courant bracket on L 0 G-invariant sections of C; using the Lemma we obtain the metric and Courant bracket of the Cartan-Courant algebroid. Finally, since the L 0 G-invariant sections ̺(ξ) of E ⊆ R are those with ξ(0) = ξ(1), we see that E red is the Cartan-Dirac structure. Similar to A I , the fibration A S 1 → G has a standard connection, defined by any choice of a function χ ∈ C ∞ (I) such that χ extends to a smooth function on R, equal to 0 for t ≤ 0 and equal to 1 for t ≥ 1. The connection is best described in terms of the caloron correspondence, Appendix C. Arguing as in the case of A I , we obtain: Theorem 6.5. The reduction of the Dirac structure (TA S 1 , E) with respect to the based loop group L 0 G is G = LG/L 0 G-equivariantly isomorphic to the Cartan-Dirac structure (A, E) over G. Furthermore, the reduction of the L 0 G-basic splitting of TA S 1 , defined by the standard connection θ on the holonomy fibration, is the usual splitting of the Cartan-Courant algebroid, identifying A ∼ = TG η . The reduction procedure gives a one-to-one correspondence between LG-equivariant (exact) Hamiltonian spaces for (TA S 1 , E) and G-equivariant (exact) Hamiltonian spaces for (A, E). Appendix A. Reduction in infinite dimensions Let V be a Banach space. The closure of a subspace F ⊆ V will be denoted cl(F ), and the annihilator ann(F ) ⊆ V * , where V * is the topological dual space of V . For Banach spaces V, V ′ , denote by B(V, V ′ ) the Banach space of continuous linear maps V → V ′ . More generally, given Banach spaces V 1 , . . . 
, V l there is a Banach space B(V 1 , . . . , V l ; V ′ ) of continuous multilinear maps V 1 × · · · × V l → V ′ . Suppose V is a Hilbert space with a pseudo-Riemannian metric B. Let B ♭ : V → V * be the associated map. For any subspace F ⊆ V , we have B ♭ (F ⊥ ) = ann(F ), and (F ⊥ ) ⊥ = cl(F ). For the following Proposition, we observe that if F 1 , F 2 are closed subspace of a real Hilbert space V , then F 1 + F 2 is closed in V if and only if ann(F 1 ) + ann(F 2 ) is closed in N is a direct sum decomposition of V into closed subspaces. By considering the dual decomposition of V , it follows that the inclusion ann(F 1 ) + ann(F 2 ) → ann(F 1 ∩ F 2 ) is an equality.) Thus, if V carries a metric B, then F 1 + F 2 is closed if and only if F ⊥ 1 + F ⊥ 2 is closed. Criteria for F 1 + F 2 to be closed may be found in [33]; in particular, it is known that the sum of disjoint closed subspaces is closed if and only if a suitably defined 'angle' between these subspaces is non-zero. Proof. (a) Choose a closed complement F to C. Then F ⊥ is a closed complement to C ⊥ . The projection to C ⊥ along F ⊥ restricts to a continuous linear map A : F → C ⊥ , and The bilinear form B descends to a continuous symmetric bilinear form We have to verify that B C is non-degenerate. Let F be a closed isotropic Applying the projection C → V C , it follows that L ⊥ C ⊇ L C . If L + C ⊥ is closed, the inclusion becomes an equality, and we obtain L ⊥ C = L C . Appendix B. Lifting problems Let Q → B be a principal G-bundle, and 1 → U(1) → G → G → 1 a central extension. Consider the exact sequence of vector bundles over B, A splitting of this sequence may be regarded as a G-equivariant map ν : g → Ω 0 (Q, g) whose composition with the projection g → g is the identity. The differential of this map is scalar-valued, defining a linear map with values in closed 1-forms. The map ̺ : g → Γ(TQ), ξ → ξ Q + α(ξ) gives isotropic generators for the natural G-action on TQ. The standard splitting of TQ is not basic for this G-action. However, by Proposition 3.14 any principal connection θ on Q defines a new G-basic splitting of TQ, giving an identification (TQ) red = TB η for a closed 3-form η ∈ Ω 3 (B). The construction also gives a 2-form ̟ on Q with d̟ = −π * η. These are exactly the 2-form and 3-form appearing in Brylinski's discussion of the problem of lifting the structure group to G [8]. In particular, the cohomology class of η is the image in de Rham cohomology of the obstruction class in H 3 (B, Z) for the existence of a lift. Appendix C. Caloron correspondence The caloron correspondence, due to Garland-Murray [18], Murray-Stevenson [27], and Murray-Vozzo [28], relates principal bundles over a base B, with structure group the (based) loop group, with (framed) principal bundles over a base B × S 1 , with structure group G. Among other things, this correspondence leads to a simple construction of principal connections on the loop group bundle. C.1. Caloron correspondence for A I . In this section we will use a version of the caloron correspondence where we work with path spaces rather than loop spaces. A framing of a principal G-bundle Q → B along a submanifold Z ⊆ B is a trivialization along Z, i.e., a section σ : Z → Q| Z . A principal connection ν ∈ Ω 1 (Q, g) is a framed connection if σ * ν = 0. Given a manifold M with two submanifolds M 0 , M 1 , we say that / / B I Any principal connection ν ∈ Ω 1 (Q, g) determines a principal connection ν I on the bundle Q I . 
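The connection ν I is presumably obtained by applying ν pointwise along paths; a hedged sketch: identifying a tangent vector at γ ∈ Q I with a vector field ξ ∈ Γ(γ * T Q) along γ,

\[ (\nu^I)_\gamma(\xi) \;=\; \big(\, t \mapsto \nu_{\gamma(t)}(\xi(t)) \,\big) \;\in\; \Omega^0(I, \mathfrak g). \]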
If ν is a framed connection, then ν I restricts to a principal connection on Q I,∂I . As a special case, take Q to be the trivial principal G-bundle Q = B×G over B = G×I, with the framings along B 0 = G × {0}, B 1 = G × {1} given by σ 0 (a, 0) = (a, 0, e), σ 1 (a, 1) = (a, 1, a), and with the principal G-action k.(a, s, g) = (a, s, gk −1 ). Consider the inclusion G → B I,∂I , taking a ∈ G to the path γ(t) = (a, t). The restriction of Q I,∂I to this submanifold G ⊆ B I,∂I is identified with G I,0 = {g ∈ G I |g(0) = e}, by the map G I,0 → Q I,∂I , g → t → (g(1), t, g(t)) . On the other hand, the map G I → A I , g → g −1 ·0 restricts to a diffeomorphism G I,0 ∼ = A I . In summary, we have a commutative diagram, To incorporate the G I -action on A I in this picture, note that the principal action of G on Q extends to an action of G × G × G: (u, v, k).(a, s, g) = (uav −1 , s, ugk −1 ). It defines a G I × G I × G I -action on Q I , given by the same formula (but with u, k, etc. as paths). The subbundle Q I,∂I is preserved by the subgroup of paths (u, v, k) such that u(0) = k(0) and v(1) = k(1), and the subbundle A I by the subgroup G I ⊆ G I × G I × G I of paths of the form (u, v, k)(t) = (k(0), k(1), k(t)). As explained above, a framed principal connection ν on Q defines a principal connection ν I on Q I , which then pulls back to a connection on Q I,∂I . Let θ denote its restriction to A I . If ν is furthermore invariant under the action of (u, v) ∈ G × G by automorphisms, then ν I will be invariant under the G I × G I -action. That is, the horizontal subbundle ker(ν I ) ⊆ T Q I is invariant not just under the gauge action, but under the full G I × G I × G I -action. It then follows that the connection θ is G I -equivariant, in the sense that the horizontal distribution ker(θ) is G I -invariant. To get concrete formulas, we express the principal connection ν on Q = B ×G in terms of its connection 1-forms κ ∈ Ω 1 (B, g): ν = Ad g −1 κ + g * θ L . Here the variable g is regarded as the projection g : B × G → G). The connection ν is a framed connection if and only if (59) i * 0 κ = 0, i * 1 κ = −a * θ R , where i s : G → B, a → (a, s). It is furthermore invariant under the G × G-action by automorphisms if and only if (60) (u, v) * κ = Ad u κ. Proposition C.1. Let ν be a framed connection on Q = B × G, defined by a connection 1-form κ ∈ Ω 1 (G × I, g). For t ∈ I, let κ t = i * t κ. Let A ∈ A I , defining a parallel transport g ∈ G I,0 . Then the horizontal lift for the resulting connection 1-form θ is given at A ∈ A I by T Hol(A) G → T A A I , X → ∂ A ξ where ξ ∈ g I is the path ξ(t) = − Ad g(t) −1 κ t (X). The proposition asserts that the horizontal lift of X is given by ∂ A ξ ∈ T A A I , the infinitesimal action of ξ ∈ g I on A I . The image of ∂ A ξ under the differential of the map A I → (G×I×G) I is the infinitesimal action of (ξ(0), ξ(1), ξ) ∈ g I ×g I ×g I at (Hol(A), 0, g), that is, (61) ξ(1) L | Hol(A) , 0, ξ G I | g = (X, 0, ξ G I | g . On the other hand, the image of X ∈ T Hol(A) G under the differential of G → (G × I) I is the constant vector field (X, 0) ∈ T B I , and by the formula for ν I in terms of the connection 1-form, (61) is precisely the horizontal lift of (X, 0). C.2. Caloron correspondence for A S 1 . The caloron correspondence for A S 1 runs as follows (see [28,Example 3.4]). Consider the trivial principal G-bundle Q = G × R × G, with the principal action of x ∈ G given as x · (a, s, y) = (a, s, yx −1 ), for a, y ∈ G and s ∈ R. 
The group of integers Z acts by principal bundle automorphisms, n · (a, s, y) = (a, s + n, a^n y); the quotient is a principal G-bundle over G × S^1. Taking loops of Sobolev class r + 1, we obtain a G-equivariant principal LG-bundle LQ → L(G × S^1), containing the bundle of quasi-periodic paths PG as a G-equivariant subbundle. In the resulting commutative diagram, the lower horizontal map takes a ∈ G to the loop s ↦ (a, s), while the upper horizontal map takes g ∈ PG to the loop s ↦ [(π(g), s, g(s))]. Similarly, working with framed loops we obtain an analogous diagram. Given a principal connection ν ∈ Ω^1(Q, g) on the bundle Q → G × S^1, the loop functor determines a connection on LQ → L(G × S^1), which then pulls back to a connection θ on the principal LG-bundle PG → G. Furthermore, if ν is a framed connection, then the resulting connection on LQ → LG restricts to a connection on L_0 Q, and hence θ reduces to a connection on A^{S^1} ≅ P_0 G → G. To describe framed connections on Q, we use the canonical trivialization of its pullback under the map G × I → G × S^1, (a, s) ↦ (a, [s]). A sufficient condition for χ ∈ C^∞(I) with χ(0) = 0, χ(1) = 1 to define a connection on Q is that χ extends to a smooth function on R, equal to 0 for s ≤ 0 and equal to 1 for s ≥ 1. The resulting connection θ on the loop group bundle PG → G is again referred to as a standard connection. Connections of this type were used by Carey-Mickelsson [12]. While θ depends on the choice of χ, the resulting 2-form ϖ ∈ Ω^2(A^{S^1}) is independent of that choice [4]; it is the pullback of the corresponding 2-form on A = A^I.
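For reference, the framed-connection conditions (59)-(60) and the horizontal-lift formula of Proposition C.1 can be collected in a single display (a LaTeX restatement of formulas already given above; nothing beyond the text is assumed):

```latex
% Framed, (G x G)-invariant connection on Q = B x G, with B = G x I:
\[
  \nu \;=\; \operatorname{Ad}_{g^{-1}}\kappa + g^{*}\theta^{L},
  \qquad
  i_{0}^{*}\kappa = 0,\quad i_{1}^{*}\kappa = -a^{*}\theta^{R},
  \qquad
  (u,v)^{*}\kappa = \operatorname{Ad}_{u}\kappa .
\]
% Horizontal lift of X at A (Proposition C.1), with parallel transport g:
\[
  X \;\longmapsto\; \partial_{A}\xi,
  \qquad
  \xi(t) = -\operatorname{Ad}_{g(t)^{-1}}\,\kappa_{t}(X),
  \qquad
  \kappa_{t} = i_{t}^{*}\kappa .
\]
```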
Ayurvedic management of venous ulcer - a case report Vrana (ulcer), in Ayurveda, is defined as a structural deformity in the skin and deeper structures (gaatra avachurnana), associated with ruja (pain), srava (discharge) etc., and caused either by the vitiation of the doshas (humours of the body) or by trauma. Vrana is basically of 2 types: Dushta vrana and Shudha vrana. Shudha vrana (acute ulcer) is easily treatable, whereas Dushta vrana is a chronic ulcer, mostly unresponsive to any treatment. Acharya Sushruta has described sixty methods for treating such vranas (ulcers). In this case, symptoms like Deerghakalaanubandhi (chronic), Teevra ruja (painful), Teevra puti srava (smelly discharge) etc. were suggestive of Pitta pradhana Sarakta Tridoshaja Dushta vrana on the left leg. Studies from India about the prevalence of venous leg ulcers (VLU) are limited. Chronic wound management strategies include compression therapy and antimicrobial therapy (if infected). However, in unresponsive cases, surgery (skin grafting) is done. A 38-year-old non-diabetic, non-hypertensive female sought Ayurvedic treatment after a wound on her left leg did not respond to conventional medicines even after 7 months of treatment. The ulcer was painful and foul-smelling, to the extent of disturbing her sleep and restricting her daily activities. Her Ayurvedic treatment comprised Patoladi kashaya, Kaishora guggulu, Guduchyadi kashaya, Manjishtadi kashaya and Avipathi churna orally, and Vrana prakshalana (wound cleaning) and Vrana lepa (application of medicinal paste) externally. Ayurvedic treatment was effective in healing the Dushta vrana completely in this case. This suggests the efficacy of Ayurveda in the management of chronic ulcers. However, a detailed study of the same with larger sample sizes will help to formulate a treatment protocol for such cases.

Introduction

Dushta vrana, according to Acharya Sushruta, is a chronic ulcer, manifested in any part of the body, caused either by the doshas or by trauma. When caused by the doshas, it is called Nija vrana, and when caused by trauma, it is called Agantuja vrana. The Nija vrana exhibits signs and symptoms in accordance with the dosha affected [1]. Sushruta Samhita gives a detailed description of the various features of a Dushta vrana. The one which is Atisamvrita (excessively covered), Ativivrita (excessively uncovered), Atikathina (too hard), Atimrudu (too soft), Utsanna (excessively elevated), Avasanna (excessively depressed), Atyushna (calor), Atisheeta (cold to touch), differently coloured, ugly looking, suppurative, painful, associated with different types of discharges, and which is chronic, is called a Dushta vrana [1]. Based on these symptoms, the patient was diagnosed with Pitta pradhana Sarakta Tridoshaja Dushta vrana on her left leg. Venous leg ulcer (VLU) is the most severe presentation of chronic venous insufficiency. VLUs have considerable impacts on patients: increased pain, impaired sleep, and reduced mobility are common, while socializing is avoided to reduce the risk of injury, and work capacity is impaired. Patients report feelings of powerlessness and hopelessness. Participants in VLU trials also report much reduced health-related quality of life at baseline compared with population norms [2]. The standard line of treatment followed by practitioners of contemporary science in such cases is wound dressing with antimicrobial drugs (if infected), compression therapy, anti-inflammatory therapy, and surgery.
A study conducted by Jones et al. concluded that chronic wound care practices are inconsistent with the evidence-based recommendations for wound management [3]. In this case, the chronicity of 7 months and the worsening of the condition over this time despite continued medication indicated that the wound was non-responsive to the treatment. Moreover, due to low financial stability, she could not afford to continue the expensive treatment and hence chose Ayurveda.

Patient information

A 38-year-old, non-diabetic, non-hypertensive female homemaker complained of a non-healing wound on her left leg for 7 months. She had consulted three surgeons in the past who treated her with various medicines, but in vain. The details of the medicines prescribed are not known. She experienced severe pain in her leg, to the extent of disturbing her sleep. There was severe watery discharge from the wound, making it difficult for her to walk. This was the first occurrence of such an ulcer on the leg, and the patient did not have any family history of the same.

Clinical findings

The patient was examined thoroughly. Bowel, appetite and micturition were normal. She had disturbed sleep due to pain. Her review of systems and vital signs were within normal limits. An oval-shaped ulcer was present on the lower lateral aspect of the left leg just above the lateral malleolus.

Diagnostic assessment

The diagnosis was made based on the clinical findings. An oval-shaped ulcer, approximately 6 cm × 3.3 cm × 0.7 cm in size, was present on the lower lateral aspect of the left leg, just above the lateral malleolus. It had sloping edges, and the floor was covered with thick red granulation tissue. There was seropurulent discharge from it, and the surrounding area was eczematous and pigmented. A few varicose veins were present in the area below the ulcer. Varicosity in the left calf region tested positive for the Trendelenburg test and negative for Moses' sign. A palpable pedal pulsation confirmed it to be a varicose ulcer and differentiated it from a deep vein thrombotic ulcer. Doppler study confirmed the absence of DVT. Fig. 1a shows the photograph of the ulcer before the treatment. Once the pain and discharge reduced, Manjishtadi kashaya [4] in a dose of 15 ml + 45 ml of warm water in the evening and Vrana lepa (application of medicated paste on the ulcer) with Vrana ropaka churna were added to attain Rakta prasadana (purification of the Rakta humour) and Vrana ropana (wound healing). Table 1 mentions the successive order of oral medicines along with their effects. Vrana lepa (application of medicated paste on the ulcer) with Vrana ropaka churna was done twice daily. Table 2 mentions the successive order of treatments with their effects. Diet and regimen: Diet and regimen play a very important role in supporting the effect of treatments. Here, the patient was advised to follow a diet and regimen which would help to balance the Pitta, Rakta and Vata doshas. The patient was asked to avoid spicy, sour, oily, fermented, and refrigerated food items. She was advised to avoid sun exposure, sleeping during the day, and staying up late at night.

Follow up and outcomes

After the initial 15 days of treatment, the ulcer started to heal. The pain and discharge reduced, and the patient gained the confidence to continue the treatment. Gradually, the ulcer showed more signs of healing, and at the end of 60 days it had healed completely. Fig. 1b, c show the photographs of the ulcer on the 30th and 60th days of treatment.
A follow-up after 6 months confirmed the non-recurrence of the ulcer (Fig. 1d). Table 3 shows the timeline of events.

Discussion

The patient had consulted three surgeons and had taken many courses of antibiotics and anti-inflammatory drugs over the past 7 months; however, the ulcer did not heal. This made her mentally weak and dejected, which was a major limitation in the case. The patient was non-diabetic and non-hypertensive, and strictly adhered to the diet regimen and timely medicine intake as instructed. Incompetence of the valves of the superficial and deep veins of the leg results in venous hypertension. Owing to the elevated intravascular pressure, fibrin gets excessively deposited around the capillary beds. This fibrin decreases oxygen permeability by 20-fold, leading to tissue hypoxia and impaired wound healing. Various inflammatory cells get trapped in the fibrin and promote severe uncontrolled inflammation, preventing proper regeneration of the wound [6].

Conclusion

The chronic venous ulcer which had not healed for 7 months despite many courses of antibiotics and anti-inflammatory therapy healed in 60 days with Ayurvedic intervention. This suggests the efficacy of Ayurvedic therapy in the healing of chronic ulcers. Non-recurrence of the ulcer even 6 months after stopping the medicines indicates the complete reversal of pathology at the venous level itself. However, a detailed study of the same with larger sample sizes will help to formulate a treatment protocol for such cases.

Patient perspective

"I had pain in both my legs for 2 years. Since I do a job standing for a long time, I thought it was because of that. I would also often get swelling in my legs, especially in my left leg, with prominence of the veins. Gradually, I developed itching on my left leg which later became a wound. Initially, I neglected it but, when it did not heal, consulted a surgeon. He prescribed a few medicines, but the wound remained the same. The condition worsened and I was unable to sleep or walk. Later, I consulted two more surgeons, but in vain. Then I decided to try Ayurvedic treatment. The doctor prescribed me Kashayas, Churna, Tablets, a Kashaya to wash the wound and a lepa to be applied. He gave me a diet regimen to be followed. In the first 15 days itself the pain and discharge from the wound reduced, and it increased my confidence. I continued to follow all the instructions of the doctor, and the wound gradually healed. I am thankful to god and to the doctor."

Informed consent

Written informed consent was obtained from the patient for publication of this case report, and any accompanying images are made available for verification by the editor of the journal.

Author contribution

Both the authors made equal contributions in treating the case, documenting it and structuring the manuscript.

Sources of funding

None declared.

Declaration of competing interest

None declared.
Evaluation of Resistance of US Rice Breeding Lines to the Rice Blast Pathogen Rice blast, caused by the fungus Magnaporthe oryzae (anamorph: Pyricularia oryzae), is a ubiquitous disease that threatens rice production in the USA and worldwide. Growing resistant cultivars is the most economical and effective way to manage this disease. Multiple races exist in the M. oryzae population in the USA. It is necessary to know the resistance spectrum of rice cultivars to the prevalent rice blast races in the areas where they are grown. Twelve isolates of M. oryzae collected from the southern US rice-growing region were used in this study. The genetic diversity of these isolates was evaluated with genetic and molecular methods, and their pathogenicity to different rice blast resistance genes was determined by the disease reactions of two sets of near-isogenic lines, each containing one blast R gene per line. From 2005 to 2016, about 200 Uniform Regional Rice Nursery (URRN) breeding lines were tested with 9-12 reference isolates annually, for a total of 2377 breeding lines. Varieties with good resistance to rice blast disease have been identified. The results could be useful for the management of rice blast disease in the southern US rice production area.

Introduction

Rice is one of the most important staple food crops worldwide, feeding over half of the world's population [1]. The demand for rice continues to increase with the growth of the global population. The USA grows approximately 1.5 million hectares of rice annually and produces about 8-11 million metric tons of rice valued at 3.6 billion dollars (Figure 1) [2]. Although the USA is a relatively small rice producer, accounting for less than 2% of the total rice production worldwide, it is a major rice exporter that occupies 6-13%, with an average of 10%, of the world rice export market (Figure 1), making the USA one of the top rice exporters in the world [3]. Rice blast disease, caused by the fungus Magnaporthe oryzae (anamorph: Pyricularia oryzae), is one of the most important diseases of rice worldwide and is responsible for approximately 30% of rice production losses globally [1,4]. A wide range of management practices has been used to reduce losses from rice blast. For example, cultural practices such as crop rotation, controlling the timing and amount of nitrogen applied, and managing the flood depth in the field may reduce the impact of blast [5]. A number of fungicides are also effective in managing rice blast disease [4]; however, fungicide use is not a preferred management option due to environmental concerns and cost. Growing resistant cultivars is the most economical and effective way to manage this disease [4,6]. Many rice blast R genes have been characterized, some of which have been widely used in rice breeding programs worldwide [6][7][8]. The R genes recognize the corresponding specific avirulence genes from the pathogen and initiate the defense mechanism [9]. For example, the R gene Pita can interact with its counterpart AVR-Pita from the pathogen and confer resistance [10]. However, changes in avirulence genes can result in the loss of function of the corresponding R genes. For example, the R gene Pita was deployed in rice cultivars in the southern USA and provided durable resistance for a long period of time [11], but the resistance of the Pita gene was overcome by race IE1k in 2004 [12]. The population of M. oryzae in the southern USA has been intensively studied [13][14][15][16][17][18].
Multiple races exist in the M. oryzae population in the USA. For example, races IB49 and IC17 were the most prevalent races in Arkansas [13][14][15], with occasional epidemics due to race IE1k or "race K" type isolates [12]. Near-isogenic lines, each containing a targeted blast resistance gene, in either the Japonica-type variety Lijiangxingtuanheigu (LTH) background [19] or the Indica-type CO39 background [20], have been used for race identification in Asia [21]. In the USA, the M. oryzae population has been intensively studied [13][14][15][16][17][18]22], but the relationship of races to individual rice blast R genes in the USA is largely unknown [22]. In addition, it is necessary to evaluate the resistance spectrum of newly developed rice breeding lines to the prevalent rice blast races in the southern US rice-growing region before they are released.

Genetic diversity of the 12 reference isolates

The genetic diversity of the 12 reference isolates was evaluated by vegetative compatibility analysis [13] and molecular methods. Vegetative compatibility analysis indicated that three isolates, A598, ZN15, and ZN46, belonged to vegetative compatibility group (VCG) US-01; isolates TM2, #24, and A264 belonged to VCG US-02; two isolates, 49D and A119, belonged to VCG US-03; and two other isolates, IB33 and IB54, belonged to VCG US-04 (Table 1). The VCG of isolate ID13 was not determined. The repetitive element-based polymerase chain reaction (Rep-PCR) with Pot2 primers [23] was used to DNA fingerprint the 12 reference isolates. The amplicon patterns of 49D, IB33, and IB54 based on the Pot2 primers were identical; TM2 and ZN7 were identical to each other; isolates #24 and A264 were identical to each other, but they had one extra band compared to those of TM2 and ZN7; ZN15 and ZN46 had similar patterns (Figure 2). The mating types of these isolates were determined by using mating-type-specific primers [24]; six isolates, including 49D, shared the same mating type (Figure 3). Seven avirulence genes were assessed using primers specific to each gene (Table 2) [25][26][27][28]. The entire AVR-Pita fragment could be amplified from nine isolates with primers YL149/YL169, but not from isolates TM2, IB33, and ID13. The coding region of the avirulence gene AVR-Pib was found in all 12 reference isolates (amplified with the AVR-Pib F3/R3 primers); however, the promoter region of the AVR-Pib gene (amplified with the AVR-Pib F2/R2 primers) was not found in isolates 49D, IB33, and IB54. The avirulence gene AVR-Pikm was only found in four isolates: 49D, IB33, IB54, and ID13. The other four avirulence genes, AVR-CO39, AVR-Pi9, AVR-Pikz, and AVR-Piz-t, were present in all 12 reference isolates (Figure 4).

IRRI rice blast near-isogenic lines

The 12 US reference isolates were tested on 31 LTH NILs (containing 24 blast R genes) and 20 CO39 NILs (containing 14 R genes) in three independent tests, with two replications in each test. Two cultivars, M204 and Francis, were included as susceptible controls.

Inoculation of blast pathogen and disease screening

Rice seed was planted in plastic trays filled with river sand mixed with potting soil in the greenhouse at the University of Arkansas, Fayetteville, AR, USA. Iron sulfate was applied to the newly emerged seedlings. The plants were fertilized with Miracle-Gro All-Purpose Plant Food 20-20-20 once a week during each test. Plants were inoculated approximately 14-20 days after planting.
Each isolate was grown on rice bran agar (RBA) [13] for approximately 7-10 days and then reinoculated on new RBA plates for 7-10 days. Spores were collected in cold water and adjusted to a concentration of 200,000 spores/ml per isolate. Each tray was inoculated with 50 ml of inoculum mixed with 0.02% Tween 20 using an air compressor sprayer. After inoculation, the plants were incubated at 100% relative humidity in a mist chamber at approximately 22°C for 24 h and allowed to dry for 2-3 h before being moved to the greenhouse. The inoculated plants were incubated in the greenhouse for 6 days. On the 7th day after inoculation, 15-20 plants of each line were scored according to a standard 0-9 disease rating scale developed by IRRI [22]. Lines rated 0 to 3 were considered resistant, whereas those rated 4-9 were considered susceptible. Based on the disease reactions of the NILs, those containing Pia, Pi3, Pi19(t), and Pi12(t) were not useful for differentiating races of the US reference isolates tested. Resistance loci Pi9(t), Pi12(t), Pib, Pi11(t), and Pita-2 were the most effective R genes against the panel of US reference isolates evaluated and could be exploited to improve resistance to rice blast disease in the USA.

Uniform Regional Rice Nursery (URRN) breeding lines

About 200 rice breeding lines, developed by the rice breeders from Arkansas, Louisiana, Mississippi, and Texas, were subjected to annual disease evaluations against the reference blast isolates at the University of Arkansas, Fayetteville, AR, in addition to the evaluation of yield and agronomic traits at various locations. A total of over 2000 breeding lines were tested during 2005-2016. The rice cultivars M204 and Francis were included in each test as the susceptible controls. The inoculation and disease scoring procedures were as described previously.

Pathogenicity of the reference isolates on the URRN lines

The susceptible control Francis was susceptible to all 12 isolates, while M204 was susceptible to 11 isolates but resistant to isolate IB54. Each year, in each test, the two susceptible controls consistently showed susceptible disease reactions, with disease rating scores ranging from 4 to 9. The percentage of breeding lines resistant to each isolate in each year was quantified (Figure 5). Isolate IB33 was the most virulent of the 12 reference isolates tested, with 71.2-98% of the lines evaluated as susceptible in the 11 years it was tested. Overall, 1963 out of 2177 lines tested (90.2%) were susceptible to IB33. Isolate 49D (race IB49) was highly virulent, with 70-90% of the lines tested susceptible in 10 of the 12 years examined; the exceptions were 2007 and 2016, when 21.5% and 48.7% of the lines, respectively, were evaluated as susceptible. Out of the 2377 lines tested in 12 years, 1673 lines (70.4%) were susceptible to 49D. Isolate TM2 (IE1k) was also considered highly virulent. In 2014 and 2015, about 75% of the lines were susceptible to TM2; in other years, more than 50% of the lines were susceptible, with the lowest percentage of susceptibility (33.5%) among the lines tested in 2007. Overall, 1361 out of 2377 breeding lines (57.3%) were susceptible to TM2. Three isolates, ID13, IB54, and #24 (IG1), were the least virulent; the percentages of susceptible breeding lines ranged between 7.5-26.7%, 13.5-27.5%, and 10.5-28.3%, respectively. Overall, the percentages of breeding lines susceptible to these three isolates were 18.7, 21.0, and 19.2%.
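Since the resistant/susceptible calls and the yearly percentages above follow directly from the 0-9 ratings, the computation is easy to reproduce. The following Python sketch is illustrative only (not the authors' analysis code); the line names and ratings in it are hypothetical.

```python
# Classify IRRI 0-9 blast ratings and summarize susceptibility per isolate,
# following the rule used above (0-3 = resistant, 4-9 = susceptible).

def is_susceptible(rating: int) -> bool:
    """A line is susceptible if its 0-9 rating is 4 or higher."""
    return rating >= 4

# ratings[line][isolate] -> 0-9 score (hypothetical example data)
ratings = {
    "RU1204": {"IB33": 7, "49D": 2, "TM2": 3},
    "RU1311": {"IB33": 5, "49D": 6, "TM2": 1},
    "RU1402": {"IB33": 2, "49D": 0, "TM2": 2},
}

isolates = sorted({iso for scores in ratings.values() for iso in scores})
for iso in isolates:
    line_scores = [by_iso[iso] for by_iso in ratings.values() if iso in by_iso]
    n_susc = sum(is_susceptible(s) for s in line_scores)
    print(f"{iso}: {100.0 * n_susc / len(line_scores):.1f}% susceptible "
          f"({n_susc}/{len(line_scores)} lines)")

# Lines resistant to every isolate tested (cf. the 45 fully resistant URRN lines)
fully_resistant = [line for line, scores in ratings.items()
                   if all(not is_susceptible(s) for s in scores.values())]
print("Resistant to all isolates:", fully_resistant)
```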
The other six isolates were intermediately virulent on the lines tested, with 40 to 50% of the breeding lines susceptible. In 2006, over 80% of the breeding lines were susceptible to isolate A119 (race IB49), but in the following years only 25 to 50% of lines were susceptible to this isolate. Over the 12 years, 970 out of 2377 breeding lines (40.8%) were susceptible to A119.

Disease reactions of US rice breeding lines to the 12 reference isolates

All 12 reference isolates were tested in 2010-2013 and 2016. In these 5 years, 45 lines were rated as completely resistant to all isolates, and 101 lines were susceptible to only one isolate. In 2010, 10 lines were resistant to all isolates, 11 lines were susceptible only to IB33, and 1 line each was susceptible only to TM2 or IB54; a total of 20 lines had no resistance to the 12 isolates. From the 2011 set of germplasm, 14 lines were resistant to all 12 isolates, 11 lines were susceptible only to IB33, 1 line was susceptible only to TM2, 2 lines were susceptible only to 49D, and 8 lines were susceptible to all 12 isolates. Five lines tested in 2012 were resistant to all 12 isolates, 12 lines were infected only by IB33, 1 and 3 lines were susceptible only to ID13 or 49D, respectively, while 7 lines were susceptible to all 12 isolates. In 2013, 14 lines were evaluated as resistant to all isolates, 13 lines were susceptible only to IB33, 1 line each was susceptible only to ID13 or ZN7, 4 lines each were susceptible only to 49D or TM2, while 5 lines were susceptible to all 12 isolates. Out of the 200 URRN lines tested in 2016, only 2 lines were resistant to all 12 isolates, 1 line was susceptible only to TM2, 2 lines were susceptible only to 49D, 32 lines were susceptible only to IB33, and 4 lines were susceptible to all 12 isolates. In 2006 and 2008, 11 isolates (all but IB54) were tested; 2 and 6 lines, respectively, were resistant to all 11 isolates, while 28 and 13 lines, respectively, were susceptible to all 11 isolates. In 2009, 2014, and 2015, isolate ID13 was not tested, but the other 11 isolates were; 3, 1, and 4 lines were resistant to, and 11, 14 and 9 lines susceptible to, all 11 isolates. In 2005, neither IB54 nor ID13 was tested. No variety was found to be resistant to all 10 isolates tested. Nine lines were susceptible to only one isolate: six of them were susceptible to isolate IB33, and one each was susceptible to 49D, TM2, or A598. There were 19 lines susceptible to all 10 isolates. Nine isolates were tested in 2007, excluding IB33, IB54, and ID13. There were 60 lines resistant and 4 lines susceptible to all 9 isolates tested in 2007.

Discussion

Growing resistant cultivars has been demonstrated to be the most economical and effective way to manage rice blast disease. During 2005 to 2016, 2377 breeding lines were evaluated for disease resistance to the 12 reference isolates. Breeding lines resistant to all isolates were found in each year of the period except 2005. Some lines were susceptible only to the most virulent isolate, IB33. The use of the lines that have the broadest level of resistance to the spectrum of reference isolates would reduce the loss due to rice blast disease. Based on the international differential cultivars and nomenclature, isolates A119, A598, and 49D are classified as race IB49 [18]. The disease reactions of many NILs tested to these three isolates were identical.
However, these three isolates can be differentiated by some NILs (R genes): Pib-, Pi11(t)-, and Pi20-containing lines were resistant to A119 and A598 but susceptible to 49D, and Pi5(t)- and Pit-containing lines were resistant to A119 but susceptible to 49D and A598. These results indicated that a set of differential cultivars should be chosen to more clearly demarcate races within the US rice blast pathogen population. Any mutation, insertion, or deletion in the avirulence genes of the pathogen could change its pathogenicity, thus resulting in the loss of function of the corresponding R gene and in disease development. The coding region of AVR-Pib was found in all 12 reference isolates, but the promoter region was not amplified from isolates 49D, IB33, and IB54, and this may explain why the Pib-containing line IRBLB-B cannot provide resistance to these three isolates. Some of the avirulence genes in the US population of M. oryzae have been studied [17,18,25]. However, the variation of other avirulence genes in the US population of M. oryzae needs to be evaluated. Specific primers were used to detect the presence/absence of seven avirulence genes. Four avirulence genes, AVR1-CO39, AVR-Pi9, AVR-Pikz, and AVR-Piz-t, were present in all 12 reference isolates. According to the gene-for-gene concept [9], the corresponding R genes Pi-CO39, Pi9, Pikz, and Piz-t would interact with these avirulence genes and initiate the defense response. It is unknown how many avirulence gene/R gene pairs could be involved in the resistance recognition process. When AVR-Pita1 was introduced into strains that were virulent on Pita-containing cultivars, those transformed strains lost their pathogenicity on Pita-containing cultivars [30], suggesting that one R gene recognizes one corresponding avirulence gene to initiate the resistance response. If this is the case, then cultivar CO39 and lines carrying Pi9, Pikz, and Piz-t would have broad-spectrum resistance to the US isolates. It has been shown that the Pi9-containing line IRBL 9-W had resistance to all 12 reference isolates, but this is in contrast to the results for the NILs carrying Pi-CO39, Pikz, and Piz-t against the reference isolates. If these avirulence genes in the reference isolates are functional, then the critical avirulence gene or combination of avirulence genes needs to be further evaluated for managing the disease. A number of R genes against the blast pathogen have been identified in rice [4,[6][7][8]. Although more than 20 R genes were incorporated into the NILs, only Pi9, Pi11(t), Pi12(t), Pib, and Pita-2 showed a broad spectrum of resistance to the reference isolates of M. oryzae found in the southern USA. The R gene Pita-2 has been widely used in US rice breeding programs and has been effective, but incorporation of other R genes to develop more durably resistant cultivars will help to reduce the impact of rice blast disease.

Conclusions

The population of M. oryzae in the southern USA is very diverse. Breeding lines with a broad spectrum of resistance to the reference isolates have been developed, and incorporation of other R genes to develop more durably resistant cultivars will help to reduce the impact of rice blast disease.

Author details

Chunda Feng and James C. Correll, University of Arkansas, Fayetteville, AR, USA. *Address all correspondence to: cfeng@uark.edu
River Water Pollution Control Strategy Due to Coal Mining Activities (Case Study in Kungkilan River, West Merapi District, Lahat) The Kungkilan River, under the administration of West Merapi Sub-district, Lahat, is at risk of water quality degradation resulting from coal mining activity. This research is aimed at analyzing the water quality of the Kungkilan River in every segment passing the coal mining companies and proposing a recommended river pollution management strategy. The research applied descriptive analysis with a quantitative approach using the sample survey method, and the recommended river pollution management strategy was analyzed through the SWOT method. Water samples from the Kungkilan River were collected from 5 stations, and wastewater samples were collected from 7 spots at the sludge sedimentation ponds of the coal mining companies, during the dry and rainy seasons. It can be concluded that the water quality of the Kungkilan River degrades right after the river flows through the coal mining area. In the dry season, each segment remains within the calculated pollution load capacity for the TSS parameter, while in the rainy season segment IV exceeds the capacity. In the rainy season, the water quality of the Kungkilan River at stations S-02, S-03 and S-04 undergoes a self-purification process, while station S-05 is in a condition of moderate pollution. Based on these findings, it is recommended that the pollution management strategy for the Kungkilan River include: conducting research on the determination of the water classification and the pollution load capacity of the Kungkilan River, and supervising water quality periodically and continually; improving the frequency of supervision by PPLH/PPLHD personnel, institutionally and functionally; moving the water disposal canals to other spots and conducting revegetation; enforcing administrative, civil, and criminal law against the companies violating water pollution management regulations; and making use of the companies' CSR programs.

Introduction

Coal mining in Lahat is generally conducted with an open pit mining system. An open pit mining system at the soil surface changes the landscape and the ecological balance of the soil surface and water bodies. Open coal mining commonly involves transferring material from one location to another, producing hills or quite deep valleys and destroying small rivers in the vicinity [16]. Coal mining with an open system has important implications for the environmental baseline [19]. The main issue of coal mining activity is the formation of acid mine drainage (AMD). Acid mine drainage (AMD), also called coal mine drainage (CMD) in the coal mining industry, is acidic water formed by the reaction between water, oxygen, and rocks containing sulphide minerals, as a result of open and closed pit mining activities [10]. Sources of AMD from mining activities include:
construction of mine roads, stripping of overburden, mining operations (either underground or open pit), overburden dumps (waste dump/disposal areas), coal storage/stockpile locations, and tailings disposal sites [3]. AMD at the mine pit surface, middle, and bottom/sediment has distinct physical and chemical characteristics, reflected in parameters such as pH, Fe, Mn, and TSS [5]. The probability of acid mine drainage forming in the rock pile/overburden is greater because the rocks contain exposed sulfide minerals [6]. At coal mining sites, the most common sulphide minerals found are pyrite and marcasite (FeS2), which are present in very significant amounts in the coal seams, overburden, and interburden [11]. AMD in coal mining environments, with its characteristic low pH and high sulfate, causes the dissolution of heavy metal elements [15]. This condition endangers the lives of aquatic biota, including plankton, benthos, fish, and plants, and eventually disrupts human health [3].

The Kungkilan River collects, stores and drains wastewater and/or runoff contaminated by the surrounding coal mining activities. There are four active coal mining operations located around the Kungkilan River, and these operations contribute substantially as sources of pollution for the river. Therefore, in-depth research on the water quality of the Kungkilan River is needed in order to obtain a water pollution control strategy for the Kungkilan River affected by coal mining activities.

The purpose of this study is to analyze the quality of wastewater and runoff from coal mining activities and the water quality of the Kungkilan River, to estimate the pollution load capacity and determine the water quality status of the Kungkilan River, and to set out a pollution control strategy for the Kungkilan River.

Experimental Sections

The research method used in this study was descriptive analysis with a quantitative approach based on the condition of river water quality, and the analysis of the recommended water pollution control strategy was carried out with a SWOT analysis (Strength, Weakness, Opportunity, and Threat). This research was conducted in September and December 2016. The studied reach of the Kungkilan River is approximately 12.3 km long, in West Merapi District, Lahat.

Sampling location

Sampling was done by setting water sampling stations, known as the sample survey method. The determination of the sampling points for Kungkilan River water and wastewater was based on considerations of ease of access, cost and time, so that the chosen points are representative of the quality of wastewater and/or runoff contaminated by the coal mining companies' activities and of the Kungkilan River water quality. The sampling sites at the wastewater outlets (KPL) and in the Kungkilan River in West Merapi District, Lahat (Figure 1) are as follows: there were 5 water sampling points on the Kungkilan River. Ten points were planned for sampling wastewater at the MPA outlets and/or runoff contaminated by coal mining activities, but due to MPA repair and maintenance activities, removal or relocation of MPAs, as well as dry MPA conditions, wastewater sampling at the MPA outlets was done at 8 points in one campaign and 7 points in the other.
Collecting Research Data and Sample Analysis

Research data were collected by observation, interview, documentation, and literature study, as presented in Table 2. Wastewater and water samples of the Kungkilan River were analyzed both in the field (in situ) and in the laboratory (ex situ), as presented in Table 3.

The pollution load capacity of the river water was estimated using the mass balance method, based on the Decree of the State Minister of Environment Number 110 Year 2003 concerning Guidance to Determine the Capacity of Water Pollution at a Water Source. The average concentration in the final stream after the flow mixes with a pollutant source is calculated as

CR = Σ(Ci Qi) / Σ(Qi)

where CR is the average concentration of a constituent in the combined flow, Ci is the concentration of the constituent in the i-th flow, Qi is the flow rate of the i-th flow, and Mi = Ci Qi is the constituent mass flux in the i-th flow. The pollution load capacity of the Kungkilan River due to coal mining activity was calculated based on the TSS parameter. The Fe and Mn parameters could not be used, because the Fe and Mn analyses of wastewater and river water have different requirements: Fe and Mn in wastewater are analyzed as total Fe and Mn, while in river water dissolved Fe and Mn are analyzed.

The water quality status was determined with the Pollution Index (PI), based on the Decree of the State Minister of Environment Number 115 Year 2003 regarding Guidance on the Determination of Water Quality Status. The formula used to determine the level of pollution of the Kungkilan River is

PIj = √[ ((Ci/Lij)M² + (Ci/Lij)R²) / 2 ]

where Lij is the concentration of a water quality parameter listed in the water quality standard for designation (j), Ci is the concentration of the water quality parameter measured in the field, PIj is the pollution index for designation (j), (Ci/Lij)M is the maximum value of Ci/Lij, and (Ci/Lij)R is the average value of Ci/Lij. The evaluation of the pollution index (PI) comprises 4 water quality statuses: 0 ≤ PIj ≤ 1.0 means good condition, 1.0 < PIj ≤ 5.0 means lightly polluted, 5.0 < PIj ≤ 10 means moderately polluted, and PIj > 10 means heavily polluted.

The recommended pollution control strategy for the Kungkilan River was analyzed using a SWOT analysis. SWOT analysis is one of the planning models and can be used as a basis for designing work strategies and programs. The stages of SWOT matrix formulation are SWOT factor selection, SWOT factor rating determination, and SWOT factor weight determination [17]. The identification of internal and external factors in the SWOT analysis is used to formulate management strategies [15].

Comparative Analysis of Wastewater Quality with the BMAL for Coal Mining Companies

The results of the wastewater quality analysis and flow measurements at the MPA outlets, representing the dry and rainy seasons, can be seen in Table 4 and Table 5 below. In the dry and wet seasons, the quality characteristics of the wastewater and/or runoff at the MPA outlets show no significant difference for each parameter; what differs is the flow of wastewater and/or runoff. The TSS parameter as a whole still meets the BMAL, but the pH, Fe, and Mn parameters at some MPA outlets do not meet, or have exceeded, the BMAL. The presence of flows of wastewater and/or runoff contaminated by coal mining activities is highly dependent on rainfall and on pumping activities. For the coal stockpile areas, mine roads and disposal areas, the wastewater is rainwater runoff contaminated by the activities in those areas, while the pit wastewater comes from pumping of rainwater or groundwater in the pit. Acidity in the coal mine area is attributed to the sulfur-dominated acid-forming material left by coal mining, reinforced by rainfall that is very large and far exceeds evapotranspiration, causing the soil to be eroded and heavily leached. The formation of acidic water causes a decrease in pH, which is capable of dissolving and carrying the heavy metals contained in the rocks traversed by the acidic water stream [1].
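As a concrete illustration of the mass-balance formula introduced in the methods above (a sketch, not the study's actual computation; the upstream values are hypothetical, while the outfall values echo those reported later for station S-04):

```python
# Mass-balance mixing (KepMenLH 110/2003 style): the combined concentration
# downstream of a confluence is the flow-weighted average of the inflows,
#   C_R = sum(C_i * Q_i) / sum(Q_i).

def mixed_concentration(flows):
    """flows: list of (C_i in mg/l, Q_i in m3/s) -> C_R in mg/l."""
    total_mass = sum(c * q for c, q in flows)  # constituent mass flux, M_i = C_i * Q_i
    total_flow = sum(q for _, q in flows)
    return total_mass / total_flow

# Upstream river water plus one mine-drainage inflow (illustrative numbers):
river = (27.5, 1.20)           # TSS 27.5 mg/l at 1.20 m3/s (hypothetical)
mine_outfall = (122.67, 0.096)  # TSS and flow as reported for S-04

c_r = mixed_concentration([river, mine_outfall])
print(f"Combined TSS downstream: {c_r:.2f} mg/l "
      f"({'within' if c_r <= 50 else 'exceeds'} the 50 mg/l class I standard)")
```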
Water Quality Analysis of the Kungkilan River

A. Comparison of the Kungkilan River water quality with the river water quality standard

Based on Governor Regulation of South Sumatera Number 16 Year 2005, Kungkilan River water is set in Class I, designated as raw water for clean water supply. The comparison of the Kungkilan River water quality analysis results with the river water quality standard is as follows:

1) Total Suspended Solids (TSS)

TSS from coal mining activities can come from roads, workshops, offices, overburden, topsoil, and mining areas; TSS is carried by surface runoff [18]. Based on the water quality analysis of the Kungkilan River, the TSS parameter in the dry season still meets the river water quality standard at every observation station, ranging from 3.42 to 30.40 mg/l against the standard value of 50 mg/l. In the rainy season, the TSS value increased considerably, especially in the downstream part of the Kungkilan River. The TSS values at stations S-01, S-02 and S-03, ranging from 3.19 to 5.30 mg/l, still meet the river water quality standard, because the land cover of the disposal areas had been closed off and the coal mining operations crossed by the Kungkilan River along this stretch had not been operating or had stopped temporarily. The water quality of the Kungkilan River after passing stations S-04 and S-05 shows a very sharp increase, as segment III (between S-03 and S-04) and segment IV (between S-04 and S-05) contain the locations of actively operating coal mining companies and receive wastewater and/or runoff flowing without treatment and/or bypassing the ponds. The TSS values at stations S-04 and S-05 are 122.67 mg/l and 373.67 mg/l, which means they exceed the river water quality standard of 50 mg/l. High TSS values can cause siltation of the water body, resulting in the formation of land and overgrowth by aquatic plants that progressively cause the water body to die (eutrophication) [2].

2) Degree of acidity (pH)

The results of the Kungkilan River water quality analysis for the pH parameter can be seen in Figure 3. In the dry and rainy seasons, the water quality of the Kungkilan River before crossing the coal mining companies' activities has pH values of 6.42 and 6.59, meaning it is still in good condition and meets the river water quality standard of 6-9. However, after crossing the locations of the coal mining companies' activities, the pH at the downstream observation stations dropped to values as low as 5.41 and 5.67, which do not meet the river water quality standard. The decrease in pH to levels below the quality standard at S-04 and S-05 is due to the influence of the quality of the wastewater and/or runoff from the disposal areas of the actively operating coal mining companies; visibly, the water at the edge of the Kungkilan River is brown or brownish red. The brownish color indicates a high content of heavy metals, especially Mn. This is because the wastewater and/or runoff is in a corrosive acidic condition [4]. The acidity in the coal mine area can be derived from the landfill area (disposal area), which consists of sulphide rocks. The potential for acid mine drainage from these rocks is larger because the rocks have exposed sulphide minerals [6].

Figure 3.
Result of water quality analysis of the Kungkilan River for the pH parameter

3) Iron (Fe)

In the dry and rainy seasons, the dissolved Fe values at the 5 observation stations overall still meet the river water quality standard of 0.3 mg/l, except at station S-04 in the dry season, where the Fe value of 0.5096 mg/l exceeds the river water quality standard. This condition is influenced by the disposal area at the location of one of the coal mining companies' activities, which has a pH of 5.05 and a total Fe value of 5.3498 mg/l; the pH at station S-04 itself is 5.62 (acidic), and can therefore dissolve the deposited Fe. With a pH that is neither too acidic nor far from neutral, pollution sources (wastewater and/or runoff) low in Fe, and natural oxidation processes, the Fe in the Kungkilan River undergoes deposition. In waters with a pH of about 7 that contain dissolved oxygen, dissolved Fe is easily oxidized into ferric ions, which readily precipitate [2].

4) Manganese (Mn)

In the dry and rainy seasons, the water quality of the Kungkilan River for the Mn parameter before crossing the locations of the coal mining companies' activities is 0.0060 mg/l, which still meets the river water quality standard of 0.1 mg/l. In the dry season, the Mn value at each observation station after crossing the locations of the coal mining companies' activities increased, exceeding the river water quality standard with values between 0.3457 and 5.6553 mg/l. In the rainy season, the Mn values at observation stations S-02 and S-03 still meet the river water quality standard, ranging from 0.0072 to 0.0342 mg/l, but at S-04 and S-05 they remain beyond the river water quality standard, ranging from 0.6987 to 1.2592 mg/l. Compared with the dry season, the Mn values in the rainy season decreased. This condition is also influenced by the relatively neutral pH value and the turbulent, shallow river flow that allows infiltration of oxygen from the free air, so that Mn4+ can form [2].
The mass balance calculation of the average TSS concentration in each water segment of the Kungkilan River during the dry season shows that the Kungkilan River still has capacity for the TSS parameter, with values ranging from 3.43 to 30.07 mg/l. This is because the TSS values in the Kungkilan River and in the wastewater and/or runoff still meet the quality standards, and the flow from the coal mining activities into the Kungkilan River is very small, so the river can still accommodate the pollutant load for the TSS parameter. In the rainy season, the mass balance calculation gives average TSS concentrations in segments I, II, and III of 3.19-27.46 mg/l, meaning the Kungkilan River still has capacity for the TSS parameter, but in segment IV the TSS value of 117.69 mg/l means that segment IV of the Kungkilan River no longer has capacity for the TSS parameter. Segments I, II and III still have capacity because the water quality of the Kungkilan River and the flows of wastewater and/or runoff from the coal mining activities are still in accordance with the quality standards, and the flow rates are small. Segment IV has no remaining capacity because the TSS of the Kungkilan River water upstream of segment IV (at S-04) has already exceeded the river water quality standard, at 122.67 mg/l with a flow rate of 0.096 m3/s. The pollution load capacity of the river for the TSS parameter is exceeded due to the runoff from the disposal areas and mine roads of the coal mining company flowing without treatment (bypassing the ponds) and the still suboptimal settling process in the MPA.

The water quality status expresses the level of water quality, indicating a polluted or good condition of a water source within a certain time, by comparison with the specified water quality standard. The water quality status of the Kungkilan River was calculated using the pollution index method for the TSS, pH, Fe, and Mn parameters, against the class I river water quality standard. The pollution indices of the Kungkilan River in the dry season are higher than in the rainy season. The water quality status of the Kungkilan River during the dry season falls into 3 (three) statuses: good condition (0 ≤ PIj ≤ 1.0), lightly polluted (1.0 < PIj ≤ 5.0), and moderately polluted (5.0 < PIj ≤ 10), while in the rainy season it falls into good condition (0 ≤ PIj ≤ 1.0) and moderately polluted (5.0 < PIj ≤ 10). In both the dry and rainy seasons, the water quality of the Kungkilan River is in the good condition category at the point before crossing the coal mining companies' activities (station S-01). At stations S-02 and S-03 the water quality status worsens to lightly polluted in the dry season, but in the rainy season it returns to good condition. The pollution index increases further (water quality decreases) at stations S-04 and S-05, after the river crosses the locations of the actively operating coal mining companies, falling into the moderately polluted category.
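A minimal sketch of the pollution index evaluation is shown below. Note that the formulation commonly used with KepMenLH 115/2003 also rescales concentration ratios above 1 logarithmically; since the text above states only the summary formula, that rescaling is an assumption here, and the station values used are illustrative (chosen to resemble the downstream results):

```python
import math

def pollution_index(c, l, p=5.0):
    """PI_j = sqrt(((Ci/Lij)_M^2 + (Ci/Lij)_R^2) / 2).

    c: measured concentrations; l: class I standard values (same order).
    Ratios above 1 are rescaled as 1 + p*log10(ratio), the convention
    commonly paired with KepMenLH 115/2003 (an assumption, not stated above).
    """
    ratios = []
    for ci, lij in zip(c, l):
        r = ci / lij
        if r > 1.0:
            r = 1.0 + p * math.log10(r)
        ratios.append(r)
    r_max = max(ratios)
    r_avg = sum(ratios) / len(ratios)
    return math.sqrt((r_max**2 + r_avg**2) / 2.0)

def status(pi):
    if pi <= 1.0:
        return "good condition"
    if pi <= 5.0:
        return "lightly polluted"
    if pi <= 10.0:
        return "moderately polluted"
    return "heavily polluted"

# Hypothetical downstream station; pH, which uses a range standard (6-9),
# is omitted from this simple sketch.
c = [122.67, 0.51, 1.26]   # TSS, Fe, Mn in mg/l (illustrative values)
l = [50.0, 0.3, 0.1]       # class I standards cited in the text
pi = pollution_index(c, l)
print(f"PI = {pi:.2f} -> {status(pi)}")
```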
Calculation of the pollution index in the dry season gives higher values than in the rainy season. This is because, in the dry season, the heavy-metal concentrations, especially Mn, in the Kungkilan River greatly exceed the river water quality standard, while in the rainy season the Mn and TSS parameters overall are not as high relative to the class I river water quality standard. The water quality status of the Kungkilan River improved at stations S-02 and S-03: in the dry season they were in the lightly polluted condition, but in the rainy season they were in good condition. This means that the influence of the disposal areas and of the coal mining companies that had temporarily stopped operating was not significant, allowing a self-purification process in the Kungkilan River water. Self-purification, the natural restoration of river water quality, takes place physically, chemically, and biologically. Natural streams (as opposed to concrete channels) can significantly support the natural self-purification process, leading to improved water quality compared with the original state [20].

Kungkilan River water pollution control strategy: from observation, the water quality analysis of the Kungkilan River, the analysis of wastewater and/or runoff from coal mining activities, interviews, and literature study, information was obtained describing the aspects and indicators of water pollution control for the Kungkilan River, as presented in Table 6. Based on the result of the SWOT analysis of water pollution control of the Kungkilan River, S < W and O > T, which places the strategy in Quadrant III (−, +), i.e., the Weakness-Opportunity (WO) strategy. The river water pollution control policy can be carried out by utilizing the opportunities to overcome the weaknesses in controlling the pollution of the Kungkilan River affected by coal mining activities, so there should be a change of strategy to minimize the weaknesses and take advantage of the opportunities.

The recommended river water pollution control strategy for West Merapi District, Lahat Regency, is as follows:
A. Study the determination of the water class and the pollution load capacity based on the identification and inventory of the pollution sources of the Kungkilan River, and monitor water quality periodically and continuously;
B. Improve the frequency of supervision by PPLH/PPLHD personnel, institutionally and functionally;
C. Move the wastewater disposal canals to other spots and carry out revegetation;
D. Enforce administrative, civil, and criminal law against the companies violating water pollution management regulations;
E. Make use of the companies' CSR programs.

Figure 1. Sampling map of wastewater and Kungkilan River water.
Figure 2. Result of water quality analysis of the Kungkilan River for the TSS parameter.
Figure 4. Result of water quality analysis of the Kungkilan River for the Fe parameter.
Figure 5. Result of water quality analysis of the Kungkilan River for the Mn parameter.
Figure 6. Results of the mass balance calculation of TSS parameters in each segment.
Figure 7. Pollution index of the Kungkilan River against the class I river water quality standard.

Table 1. Coordinate points of the sampling locations of Kungkilan River water (S) and wastewater outlets (OT).
Table 4. Results of wastewater quality analysis at the research sites (September 2016).
Table 5. Results of wastewater quality analysis at the research sites (December 2016).
Table 6. Aspects and indicators of water pollution control of the Kungkilan River:
1. Condition of the Kungkilan River: (1) there are protected forest areas in the upper reaches of the Kungkilan River; (2) the water quality of the Kungkilan River is set in the class I criteria, designated for use as raw water for drinking water; (3) before crossing the coal mining companies' activities, the water quality still meets the class I river water quality standard for the TSS, pH, Fe, and Mn parameters; (4) at some points the water quality has exceeded the class I standard, in the dry season for the pH, Fe, and Mn parameters, and in the rainy season for the TSS, pH, and Mn parameters; (5) at some points in the rainy season the capacity of the Kungkilan River for the TSS parameter has been surpassed, but not in the dry season; (6) the water quality status of the Kungkilan River ranges from good condition to moderately polluted; (7) at some points and for certain parameters the Kungkilan River water has undergone a self-purification process; (8) downstream, after crossing the coal mining companies' activities, the river is still used by some communities of Muara Maung Village, West Merapi District.
2. Government role: (1) an inventory and identification of the pollution sources of the Kungkilan River is not yet available; (2) the pollution load capacity has not been determined; (3) regulations on river and wastewater quality standards have been established; (4) licensing regulations concerning the disposal of wastewater already exist; (5) water quality monitoring of the Kungkilan River has been done, but only at the time of public complaints; (6) monitoring of environmental compliance has been carried out, but not by Regional Environmental Supervisory Officials (PPLHD) with functional status; (7) information and supporting data related to the Kungkilan River are incomplete; (8) coordination between agencies in the control of river water pollution is still lacking; (9) environmental permits have been granted to the coal mining companies in accordance with the RTRW and AMDAL review.
3. Role of the coal mining companies: (1) three of the four companies already have a KPL, and there is still potential for wastewater and/or runoff to flow, bypassing it, into the Kungkilan River; (2) the coal mining companies already have environmental permits, but these are not yet based on the water pollution load capacity; (3) several coal mining companies have submitted the implementation reports of the Environmental Management Plan (RKL) and the Environmental Monitoring Plan (RPL).

4. Conclusion

1. The quality of the wastewater and/or runoff from the coal mining companies' activities has not been optimally treated, so it tends to be acidic and contains heavy metals, especially Mn, and is a potential source of pollution of the Kungkilan River.
2. The water quality of the Kungkilan River after crossing the coal mining companies' activities has, for the TSS, pH, and Mn parameters, generally exceeded the river water quality standard.
3. The pollution load capacity of the Kungkilan River for the TSS parameter is still available in the dry season, but in the rainy season the Kungkilan River has exceeded the capacity by 67.69 mg/l.
4. The water quality status of the Kungkilan River at stations S-02 and S-03 reflects a self-purification process, but stations S-04 and S-05 remain in the moderately polluted water status.
5. Based on the results of the SWOT analysis, the recommended water pollution control strategy for the Kungkilan River in West Merapi District, Lahat Regency, is the Weakness-Opportunity (WO) or change strategy.
Analysis of current characteristics of transformer bias caused by subway stray current based on measured data Metro stray current, the driving source of DC magnetic bias in urban-network transformers, is characterized by rapid change, complex propagation paths, and many influencing factors. It is unknown where the stray current leaks from, and its specific value cannot be obtained through measurement. Therefore, this paper studies the characteristics of the transformer neutral-point bias current based on measured data and fault recording data obtained in the measurement process, and analyzes the influence of stray current on transformers in the urban network.

Introduction

The geomagnetic storm of March 13, 1989 induced transformer magnetic bias that led to a blackout of the Quebec power grid [1]. Through this accident, people realized for the first time that geomagnetic storms may cause serious damage through transformer magnetic bias. China is the largest UHV power transmission country in the world. During the development of UHV, it was found that unipolar operation of DC grounding electrodes would cause transformer magnetic bias and bring certain adverse effects [2]. Subsequently, China took the lead in researching the influence of unipolar operation of DC grounding electrodes on transformer magnetic bias, and has now basically eliminated the risk of magnetic bias accidents caused by grounding electrodes [3].

In 1990, F. J. Lowes discovered the stray current leaking from the subway traction network and the electromagnetic field generated by it [4]. Later, other scholars found that stray current accelerates the electrochemical corrosion of buried metal components and pipes [5]. With the development of cities, in order to alleviate the pressure on ground transportation and meet the travel needs of citizens, the scale of subways has been further expanded, and the degree of coupling between the metro and the urban power grid has further deepened. In 2011, the phenomenon of transformer bias caused by stray current was first discovered in Shenzhen [6]. Later, transformer magnetic bias caused by stray current was also found in Fuzhou, Guangzhou and Changsha [7]~[9], and relevant observation and research were carried out on a transformer, obtaining data such as the measured current and noise at the transformer neutral point. However, the number of stations monitored was small, so the obtained data and magnitude levels are insufficient to study the characteristics of the magnetic bias current caused by stray current in the urban network.

This paper studies the magnitude level and variation law of the neutral-point bias current data of multiple transformers in a city in China. The results show that the magnitudes of the bias currents and their variation characteristics differ greatly; in other words, it is difficult to find a characteristic law from the measured bias current of a single substation alone. Therefore, this paper analyzes the measured neutral-point bias current data of many transformers by statistical methods, and studies the magnitude characteristics and risks of transformer magnetic bias current.

Observation experiment and measurement site conditions

In order to explore the effects of the various influencing factors of metro stray current on transformers, 7 stations of 220 kV and 1 station of 500 kV were selected for observation experiments.
In September 2020 and January 2021, magnetic bias current measurements at the neutral point were completed for a total of 13 220 kV transformers and 2 500 kV transformers.

Observation experiments
The experiment uses an open Hall element (100 A/2 mA), an open zero-sequence current transformer (100 A/5 A) and a portable DC bias monitoring device to measure the bias current. When measuring, the neutral grounding wire of the transformer is placed in the opening of the Hall element and the zero-sequence current transformer, and the measured value is read and displayed by the DC bias monitoring device in real time. Stray current flowing back to earth from the transformer neutral point is taken as positive, and current in the opposite direction as negative.

Measurement sites
1) Site 1: Two 220 kV three-winding transformers connected as YNyn0d11 were measured at measuring station 1; the two transformers operate separately. Measuring station 1 is located in the downtown area of the city. The straight-line distances to the two nearby subway lines are 1500 m and 350 m respectively, and the station is 370 m from the nearest subway station. There is no turn or transfer of a subway line within 1000 m of the measurement site.
2) Site 2: Three 220 kV autotransformers were measured at measuring station 2, located in the intersection area of four subway lines. The straight-line distances between the measuring station and the four subway lines are 160 m, 670 m, 700 m and 960 m respectively. All the outgoing substations A, B, C and D of measuring station 2 are close to the subway lines, so the possibility that metro stray current flows between substations A, B, C and D and measuring station 2 cannot be ruled out.
3) Site 3: Two 220 kV autotransformers were measured at measuring station 3. Measuring station 3 is close to a subway transfer station, and the straight-line distance to each of three subway lines is less than 300 m. The operating schedules of the three subway lines are inconsistent, and measuring station 3 is affected by all three lines at the same time. The influencing factors that must be considered are therefore too complicated for studying the influence of stray current on transformer magnetic bias under a single condition; the site is better suited to studying the characteristics of the influence of multiple lines.
4) Site 4: Measurement site 4 is equipped with two 220 kV transformers, one a three-winding transformer and the other a double-winding transformer. The straight-line distances between measurement site 4 and the subway lines are 950 m, 700 m and 500 m (two of the lines overlap each other).
5) Site 5: Measurement site 5 is located at the edge of the city, with two 220 kV three-winding transformers; the neutral bias current was measured on only one of them. Compared with the other test stations, station 5 is farther from the subway lines: the distance from the end point of one of the lines is 3000 m, and the straight-line distance from the nearest subway line is 4000 m.
6) Site 6: Site 6 is equipped with three 220 kV transformers, is located in the suburbs of the city, and has only one subway line nearby. The distance between site 6 and the end of the subway line is 1300 m.
7) Site 7: Measurement site 7 is a 220 kV station with 2 transformers; the neutral bias current was measured on only one of them. There is only one subway line around site 7, at a straight-line distance of 300 m and about 460 m from the nearest subway station. Near measurement site 7, the turning of the subway line has a certain impact on the distribution of the geoelectric field around the site.
8) Site 8: Measuring station 8 is an underground 500 kV station with two 500 kV single-phase autotransformers, located in the downtown area and surrounded by many subway lines. During substation commissioning in 2010, the noise of the main transformer did not exceed the limit. In 2015, subway lines A and B were put into use successively; the noise and vibration of the transformers at station 8 intensified, with obvious DC magnetic bias.

Analysis of measured bias current data
The magnitudes of the obtained measurement data and their variation characteristics vary greatly. The following studies the risk of subway-induced transformer bias through analysis of the influencing factors and the magnitude of the bias current. The magnitude of the bias current is affected by operating factors such as train starting, acceleration, constant speed and deceleration. In addition, although subway trains follow a set operating sequence, unpredictable changes usually occur during actual operation. The subway stray current, and the interference current that may bias the transformer, therefore have the characteristics of pulsating change, randomness and unpredictability.

Influence of the location of the opposite-end substations
Although there are two subway lines around measurement site 4, the straight-line distance to one of them is large, so the site can be treated as being affected by only one subway line. Fig. 2 shows the measured curves of the neutral-point magnetic bias current at measuring stations 6 and 5. The level and fluctuation of the neutral-point bias current at measurement site 5 are found to be much higher than at site 6. The earth electrical structure and the number of surrounding subway lines are similar for the two sites. The biggest difference is that the substations at the opposite ends of the lines of measuring station 5 are all located in the central urban area, while only a few of the substations opposite station 6 are located near the subway. Stray current is transmitted to the neutral point of the transformer at measuring station 5 through the transformers at the opposite ends of the lines. Therefore, although measurement site 5 is farther from the end of the subway line than site 6, its neutral bias current level is higher. This shows that when studying the influence of metro stray current on urban-network transformers, in addition to the relative position of the substation and the subway lines, the relative position of the opposite-end substations and the subway lines should also be considered.

Influence of subway operating conditions
The speed of the subway trains and the number of trains running on a line are adjusted to the size of the passenger flow. During the peak period more trains run on the line, the departure and operating intervals are short, and the value of the stray current fluctuates greatly, while in the off-peak period it is relatively small.
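The peak/off-peak contrast described above can be quantified with simple summary statistics. The sketch below is illustrative only, not the authors' measurement software: the sampling rate, field names and the synthetic current series are assumptions.

```python
# Minimal sketch: summarise a sampled neutral-point bias current series
# by operating period (peak vs. off-peak). All numbers are placeholders.
import numpy as np

def summarise(current_a, timestamps_h, hours=(7, 9)):
    """Magnitude and fluctuation statistics for one time window (hours of day)."""
    mask = (timestamps_h >= hours[0]) & (timestamps_h < hours[1])
    x = current_a[mask]
    return {
        "mean_A": float(np.mean(x)),
        "max_abs_A": float(np.max(np.abs(x))),
        "std_A": float(np.std(x)),                               # fluctuation level
        "mean_abs_step_A": float(np.mean(np.abs(np.diff(x)))),   # rate of change
    }

# Synthetic example: 1 Hz samples over one day
t = np.arange(0, 24, 1 / 3600.0)
rng = np.random.default_rng(0)
i_neutral = rng.normal(0.0, 0.5, t.size)        # placeholder measurement series

print(summarise(i_neutral, t, hours=(7, 9)))    # morning peak
print(summarise(i_neutral, t, hours=(1, 4)))    # after shutdown
```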
In order to study the changes of the transformer neutral current during peak and off-peak times, transformer No. 2 at site 8, which has dense subway lines around it, was selected, and its neutral bias current was measured at 7 different times. Representative measurement data at 3 of these times were selected for analysis; the current curves are shown in Fig. 3. From Fig. 3 it can be seen that the neutral-point bias current during the peak period is almost entirely negative; compared with the off-peak period, its magnitude changes more dramatically and faster. After the trains are out of service, the neutral-point bias current of site 8 is almost zero. It can therefore be determined that the operation of the subway is the cause of the transformer bias at site 8.

The waveform analysis of the fault recording data of site 8 on the day of the measurement is shown in Figs. 4 and 5. The analysis results are clearly consistent with the existing harmonic-analysis conclusions for transformer DC bias. There are obvious differences between the harmonic characteristics of the transformer high-voltage side current during the peak period and after the subway is out of service: the 2nd, 4th, 8th and 10th harmonics of the high-voltage side current are prominent during subway operation, with the 2nd harmonic content the highest among the even harmonics. After the subway shutdown, however, the 5th, 7th, 11th and 13th harmonics dominate, and the even harmonics caused by DC magnetic bias are almost eliminated. This proves again that subway stray current is the cause of DC magnetic bias of transformers in the urban network.

Conclusion
Through observations at the 8 selected stations, measured data of the neutral-point current of multiple transformers were obtained, and the influence of stray current on the 220 kV and 500 kV transformers of the urban power grid was studied. The conclusions are as follows:
1) The current generated by metro stray current at the neutral point of urban-network transformers generally changes randomly, and the situation is very complicated owing to the many influencing factors, so accurate theoretical calculation is difficult;
2) The position of the substations at the opposite ends of the lines is also one of the factors that affect the level of the bias current at the transformer neutral point;
3) Metro stray current is the main cause of transformer DC bias in the urban power grid. The degree of influence varies with the subway operating conditions: the peak period has the deepest impact, the flat-peak period the second, and there is basically no impact after the subway shutdown.
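The even-harmonic signature used above to diagnose DC bias can be illustrated numerically. The following sketch is a hedged example, not the authors' waveform analysis: the 50 Hz fundamental, sampling rate and synthetic harmonic amplitudes are all assumptions.

```python
# Sketch: ratio of low-order harmonics to the fundamental via FFT, as a
# simple indicator of DC magnetic bias (prominent 2nd/4th harmonics).
import numpy as np

f0, fs, T = 50.0, 6400.0, 0.2          # fundamental, sampling rate, window
t = np.arange(0, T, 1 / fs)            # 1280 samples -> 5 Hz bin resolution

def harmonic_ratio(i_t, order):
    """Amplitude of the given harmonic relative to the fundamental."""
    spec = np.abs(np.fft.rfft(i_t)) / len(i_t)
    freqs = np.fft.rfftfreq(len(i_t), 1 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]
    return amp(order * f0) / amp(f0)

# Synthetic "biased" waveform: fundamental plus strong 2nd and 4th harmonics
i_biased = (np.sin(2 * np.pi * f0 * t)
            + 0.30 * np.sin(2 * np.pi * 2 * f0 * t)
            + 0.12 * np.sin(2 * np.pi * 4 * f0 * t))

print("2nd/1st:", round(harmonic_ratio(i_biased, 2), 3))   # ~0.30
print("4th/1st:", round(harmonic_ratio(i_biased, 4), 3))   # ~0.12
```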
Subconvexity for twisted GL(3) L-functions

Using the circle method, we obtain subconvex bounds for GL(3) L-functions twisted by a character modulo a prime p, hybrid in the t- and p-aspects.

The twisted L-function is given by the Dirichlet series L(s, π × χ) = Σ_{n≥1} λ(1, n)χ(n)n^{−s} for Re(s) > 1. This function has an analytic continuation to the entire plane and satisfies a functional equation. The analytic conductor is asymptotically of size c(t, π × χ) ≍ (pt)^3 for a fixed π; see for example [7]. We are interested in bounds of the form

L(1/2 + it, π × χ) ≪_ε (pt)^{3/4 − δ + ε}   (1.2)

for a certain 0 ≤ δ ≤ 3/4 and for any ε > 0. The bound with δ = 0 follows from the functional equation and the Phragmén–Lindelöf convexity principle for all automorphic L-functions [6] and is called the convexity bound. The Lindelöf hypothesis is the statement with δ = 3/4. Any improvement δ > 0 is called a subconvex bound; particularly challenging milestones are the Burgess-type bound δ = 3/16 and the Weyl-type bound δ = 1/4, giving bounds (1.2) which are respectively three-fourths and two-thirds of the convexity exponent. In this paper we show the following theorem.

Theorem 1. Let π be a Hecke–Maass form for SL_3(Z) and χ a primitive character modulo a prime p. Assume p < t^{8/7}. The following subconvex bound holds:

L(1/2 + it, π × χ) ≪_{π,ε} (pt)^{3/4 − 3/40 + ε}.

For p = 1 this exponent matches the recent result of [1], which improved upon the then best known subconvexity bound in the t-aspect, due to Munshi [14] with δ = 1/16. It also improves, in the allowed range for (t, p), the best known subconvexity bound in the hybrid (t, p)-aspect, due to Lin [11] with δ = 1/36.

Previous results. The literature on the subconvexity problem is rich. We start by mentioning some results in the GL(1) and GL(2) cases for the purpose of assessing the strength of current results for GL(3) L-functions. The Weyl-type bound has been reached in full generality for GL(1) L-functions [18] in the hybrid (t, χ)-aspect. Subconvexity has been established in all aspects simultaneously for GL(2) L-functions by Michel and Venkatesh [12] with unspecified exponent, recently determined as δ = 1/128 by Wu [20]. For the (t, χ)-aspect, the Burgess-type subconvex bound has been reached in both aspects simultaneously by Wu [19], conditionally on the Ramanujan hypothesis. For holomorphic cusp forms, the Burgess-type subconvex bound in the χ-aspect and the Weyl-type subconvex bound in the t-aspect have been achieved simultaneously by the second author [9], also conditionally.

In the case of GL(3) self-dual L-functions, Li [10] achieved subconvexity with δ = 1/16 in the t-aspect by proving a first-moment result for a family of GL(3) × GL(2) L-functions. The proof relies on positivity of the central values of the L-functions involved, allowing one to deduce bounds for a single L-function by dropping all but one term. Since then there have been quite a few results based on this approach and its amplified variants. We list a few here: Blomer [2] achieved subconvexity in the χ-aspect with δ = 1/8 for quadratic characters, and Nunes [17] did so in the t-aspect with δ = 1/8. For the (t, χ)-aspect, a similar approach led Huang [4] to subconvexity with δ = 1/46.

Self-duality is not a common feature among GL(3) L-functions, so it is desirable to remove this condition. Munshi has successfully done so by deploying the circle method, starting from a single L-function instead of a moment. He achieves subconvexity in the t-aspect in [14] with δ = 1/16, and this exponent has recently been improved by Aggarwal [1] to δ = 3/40.
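For orientation, the savings δ quoted above can be tabulated with exact rational arithmetic. The snippet below is purely illustrative (the labels are ours); it prints each δ and the resulting exponent 3/4 − δ in (1.2).

```python
# Compare subconvexity savings delta and the exponents 3/4 - delta exactly.
from fractions import Fraction as F

convexity = F(3, 4)
records = {
    "Lin [11] (hybrid)":      F(1, 36),
    "Munshi [14] (t-aspect)": F(1, 16),
    "Aggarwal [1] / Thm 1":   F(3, 40),
    "Burgess-type target":    F(3, 16),
    "Weyl-type target":       F(1, 4),
}
for name, delta in sorted(records.items(), key=lambda kv: kv[1]):
    print(f"{name:24s} delta = {delta}  ->  exponent {convexity - delta}")
# Larger delta = stronger bound; 3/40 improves on both 1/16 and 1/36.
```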
Using the GL(2) Petersson trace formula as an expansion of the delta symbol, he also succeeded in establishing δ = 1/308 for the χ-aspect in [16]. Inspired by this work, Holowinsky and Nelson wrote the character as a weighted sum of exponentials and Kloosterman sums, obtaining δ = 1/36 in the χ-aspect in [3]. With this methodology, Lin afterwards gave a hybrid subconvex bound in the (t, χ)-aspect with δ = 1/36 in [11], however falling short of the exponent δ = 3/40.

In this paper we take up a suggestion made by Munshi in [15] that simplifies the treatment of certain oscillatory integrals in [14]. We also consider a character twist, which turns out to interact delicately with the t-aspect circle method, ibid. We improve the best known bound in the joint (t, χ)-aspect and achieve δ = 3/40, in a restricted range for (t, p).

Remark. During the preparation of this manuscript we came across Huang and Xu's very recent work [5]. They prove our Theorem 1 with the same t-aspect saving of t^{−3/40} but with a weaker p-aspect saving of p^{−1/32}. Their proof does not use the conductor-lowering trick, but a method called the mass transform.

Structure of the paper. In Section 2.1 we are left with estimating a finite sum after using the approximate functional equation. In Section 2.2 we apply the conductor-lowering trick of Munshi as in [15] in order to reduce the range of the variables introduced by the delta method. We next apply the delta method in Section 2.3 and decouple the oscillations of the GL(3) automorphic coefficients λ(1, n) from those of χ(n)n^{−it}. Once these oscillations are separated, we apply the Poisson summation formula to the GL(1)-sum in Section 3.1 and the GL(3) Voronoï formula to the GL(3)-sum in Section 3.2. The integral transforms appearing in these two summation formulas share a common variable, coming from the particular form of the delta method used. This is the point that allows a simplified treatment of the integrals appearing in [14]; it is carried out in Section 3.3. After an application of the Cauchy–Schwarz inequality and the Poisson formula in Section 4, the estimates boil down to an arithmetic and an analytic part. The analytic part consists of various oscillatory integrals and is handled by the stationary phase method in Section 5.1, while the arithmetic part is handled in Section 5.2. These bounds, together with an optimization in the conductor-lowering parameter, then yield the subconvexity result in Section 5.4.

Notations. We use the following standard notations: n ∼ N if n ∈ (N, 2N); f ≪ g if there is a constant C > 0 such that |f| ≤ C|g|; and f ≍ g if f ≪ g and g ≪ f. All these asymptotic relations are relative to t → ∞ and p → ∞. Also, as is common in the literature, our use of ε is fluid: it refers to an arbitrarily small positive exponent but may change from line to line. We let f ≪_ε g if f ≪ (pt)^ε g, and accordingly for ≍_ε. Since the subconvexity bounds are of the form (1.2) and always feature an arbitrarily small exponent ε, this convention is natural and lightens notation considerably. We allow ourselves to drop subscripts in indices of summation.

2.1. By the approximate functional equation, it suffices to bound sums of the shape

S(N) = Σ_{n∼N} λ(1, n) χ(n) n^{−it} W(n/N)

for N ≪ (pt)^{3/2+ε}, up to an error O((pt)^{−A}) for any A > 0. Here W is a fixed smooth nonnegative bump function supported in [1, 2] and λ(1, n) are the Fourier–Hecke coefficients of the GL(3) form π. Assuming Lindelöf on average, applying Cauchy–Schwarz and then bounding trivially, we have S(N) ≪_ε N ≪_ε N^{1/2}(pt)^{3/4}, which corresponds to the convexity bound.
Therefore our goal amounts to obtaining some extra savings over this trivial bound.

2.2. Conductor lowering. Let us formally separate the GL(1) and the GL(3) oscillations:

S(N) = Σ_n Σ_m λ(1, n) χ(m) m^{−it} δ_{n=m} U(n/N) W(m/N).

Here U = W^{1/2}, which is again a smooth bump function supported in [1, 2]. The circle method is an analytic expansion of the n = m condition, which is efficient only if n − m lies in a restricted range [15]. It will turn out to be essential for our final bound to restrict this gap even further after opening up δ_{n=m} via the circle method. To this end we introduce Munshi's conductor-lowering procedure. Let V be a function supported in [1, 2] such that ∫_R V(v) dv = 1, and let K > 1 be a parameter to be determined later. Then we write

S(N) = ∫_R V(v) Σ_n Σ_m λ(1, n) χ(m) m^{−it} (n/m)^{ivK} δ_{n=m} U(n/N) W(m/N) dv,

which is an identity since (n/m)^{ivK} = 1 on the diagonal n = m. The innermost integral in v, taken separately from the n = m condition, ensures that |n − m| ≪_ε N/K. This is the content of the following lemma.

Lemma 1. If n, m ∼ N and V is a smooth function supported in [1, 2], then the v-integral above is negligibly small unless |n − m| ≪_ε N/K.

Proof. We may assume n > m, exchanging n and m if necessary, and put h := n − m. After a change of variables the integral becomes ∫_R V(v) e^{ivK log(1 + h/m)} dv. After k integrations by parts this is bounded by (K log(1 + h/m))^{−k} ≍ (Kh/m)^{−k}, since h/m ∈ [0, 1] and log(1 + x) ≍ x in this region. This finishes the proof of the lemma.

Let us now assume (2.9). This allows us to ignore certain terms, but is also the reason why Theorem 1 displays a restricted range for (t, p).

2.3. Delta method. We analytically separate the λ(1, n) from the χ(n)n^{−it} using the delta method [15], which we use in the form

δ(h = 0) = (1/Q) Σ_{1≤q≤Q} (1/q) Σ*_{a mod q} e(ah/q) ∫_R g(q, x) e(hx/qQ) dx.

Here comes the importance of restricting the range of the shift variable h := n − m: we may now choose Q = √(N/K) (instead of √N). The function g(q, x) is bounded and rapidly decaying in x; see [15, (6)]. For our purposes we can therefore consider the x-integral as essentially restricted to the bounded interval |x| ≪_ε 1. The sums and integrals in (2.4) can be interchanged, giving (2.14). The right-hand side of (2.10) is trivially bounded by O((pt)^ε). So it may look like the loss is not great, but note that the m- and n-variables have now been separated at the cost of an extra sum of length N. Thus we need to save N plus a little extra.

3. Voronoï formulas. We now apply the Poisson and Voronoï formulas to the m- and n-sums respectively.

3.1. The GL(1) Poisson. To take advantage of both the modulus p of χ and the modulus q of e(a·/q), we apply the Poisson summation formula after splitting into classes modulo pq. We obtain a dual m-sum in which τ(χ), the Gauss sum of χ, appears, and where we use the notation ψ(m, q, a) for the arithmetic factor and I(m, x) for the integral transform. For the behaviour of I(m, x) there are two regimes, depending on the size of q. Put M_0 := max(pqt/N, p√K/√N); Lemma 2 below shows that the dual sum is essentially supported on |m| ≪_ε M_0, with I(m, x) ≪ t^{−1/2}.

Proof. Applying Poisson summation modulo pq we get the dual expansion. If (p, q) = 1 we can apply the Chinese Remainder Theorem and factor the arithmetic sum modulo pq. If q = p^ℓ q′ with (q′, p) = 1 and ℓ ≥ 1, we factorize the sum into prime powers. For the primes dividing q′ we only get the condition m ≡ ap (mod q′). For the p-factor, write u = u_1 + pu_2, where u_1 runs modulo p and u_2 runs modulo p^ℓ. The second sum is p^ℓ δ_{m ≡ ap (mod p^ℓ)}; thus in the first sum (m − ap)/p^ℓ is an integer, and using properties of Gauss sums we obtain ψ(m, q, a).

For the I(m, x) integral, we apply the stationary phase argument as in [8], introducing the corresponding phase function. In the large-q regime, Nx/qQ ≪ t^{1−δ}, therefore the stationary point falls inside the support of U only if m ≍_ε M_0. If q ≍ √(NK)/t, then Nm/pq can reach size t without harming the stationary point; that is, |m| ≪ pqt/N ≍ M_0.
If q ≪ t^δ √(NK)/t, then for a stationary point to occur the Nm/pq term and the Nx/qQ term must be of the same size, i.e. |m| ≍ M_0. In the cases where the stationary point lies inside the support of U, the second-derivative bound gives I(m, x) ≪ t^{−1/2}.

Now for the constant term: if (p, q) = 1, then χ(m) removes the m = 0 term. Also, if p² | q, then ap is never congruent to zero modulo q. So we are left only with the case p ∥ q; but there the stationary point is outside the support of U, and we can bound the contribution roughly.

At this point, just to track our progress: a trivial bound applied to (2.12) gives a saving over the previous position, where we used the Ramanujan-on-average bound [10, (2.6)] on the GL(3) Fourier coefficients, after Cauchy–Schwarz, for the bound Σ_{n∼N} |λ(1, n)| ≪_ε N. Notice that by this application of Poisson we regained our foothold: from a lost factor of N = (pt)^{3/2} we are back to a loss of (pt)^{1/2} over convexity.

3.2. The GL(3) Voronoï. The GL(3) Voronoï summation formula [13] (see also [2, Lemma 3]) reads as in (3.14). Here we introduce, for ℓ ∈ {0, 1}, the Mellin–Barnes integral (3.15), involving the ratio of Gamma factors attached to the Langlands parameters and the Mellin transform g̃(−s), and we define G_± = G_0 ± iG_1. In the above, (α_1, α_2, α_3) are the Langlands parameters of π and g̃ is the Mellin transform of g. We work with g(y) = U(y/N) y^{iv} e(xy/qQ) from the n-sum in (2.14). From now on we focus on G = G_+, the minus case being treated mutatis mutandis. The function g is supported in [N, 2N]; in the range yN ≫ N^ε we can extract the modulus and phase of G(y) explicitly [10, Lemma 2.1]. This motivates separating the treatment of (3.14) into two cases: the complementary range n_1²n_2 ≪ N^ε q³/N and the main range n_1²n_2 ≫ N^ε q³/N. Let us call N_main the contribution to (3.14) of the terms with n_1²n_2 ≥ N^ε q³/N, and N_comp the remaining sum. The decomposition N = N_main + N_comp also gives a decomposition S(N) = S_main(N) + S_comp(N) via (2.12).

The following bound (3.17) holds for G in the complementary range. Proof. Write s = σ + iτ. By the stationary phase method the Mellin transform g̃(−s) is negligible unless |v − τ| ≍ xN/qQ, and the second-derivative bound then yields g̃(−s) ≪ N^{−σ}|τ − v|^{−1/2}. In (3.15) the ratio of Gamma functions is approximated as (1 + |τ|)^{3/2+3σ}, using the fact that α_1 + α_2 + α_3 = 0. Move the line of integration to σ = −3/2. For ℓ = 1 the integrand is analytic in the intermediate region since |Re(α_i)| ≤ 1/2; for ℓ = 0 we pass three poles at s = −1 − α_i, i ∈ {1, 2, 3}. On the new vertical line the integral is essentially supported on |v − τ| ≪_ε xN/qQ. The contribution of the residues is bounded acceptably. For the integral, we change variables τ → τ + v and split into the regions |τ| < 1 and 1 ≤ |τ| ≪ xN/qQ. In the first integral |τ|^{−1/2} is integrable, so its contribution is ≪ K^{−3}. In the second we drop the factor |τ|^{−1/2} ≤ 1 and bound the contribution by K^{−2}. For the third integral, splitting dyadically into ranges K2^i ≤ |v − τ| ≤ K2^{i+1}, we obtain the result.

Using the bound (3.17) and the Weyl bound for Kloosterman sums in (3.14), we get a bound for the GL(3)-sum in the complementary range. We then apply the Cauchy–Schwarz inequality: the first factor is essentially bounded, and appealing to the Ramanujan bound on average [10, (2.6)], the second factor is also essentially bounded. We therefore obtain a bound for S_comp(N) which is subconvex for any value of K > (pt)^{1/10}.

The main range. In this range the weight function G behaves as follows.
Since g is supported in [N, 2N], in the range zN ≫ N^ε we can extract the modulus and phase of G_±(z) [10, Lemma 2.1] and write, for any A > 0, an asymptotic expansion with a certain leading constant α depending only on π. Substituting z = n_1²n_2/q³ and g(y) = y^{iv} e(xy/qQ) U(y/N) into (3.14), inserting the asymptotic expansion above and changing the variable y → Ny, we obtain

N_main = α (N^{2/3+iv}/q) Σ_{n_1 | q} Σ_{n_2 : n_1²n_2 ≫ q³/N} (…),

up to a negligible error. Integrating by parts repeatedly in the y-integral yields, for all k ≥ 0, a majorant showing that the integral is vanishingly small outside the stationary range, where we recall the relevant bounds on the parameters.

3.3. Simplifying the x-integral. We now concentrate on the x- and v-integrals. This motivates the change of variable y = ξ + u with u ≪_ε qQ/N (note that this is small, since qQ/N ≪ 1/K). We get an expression in which we have introduced U_1(ξ) = U(ξ)U(ξ + u)(ξ + u)^{−1/3}; it is a non-oscillating bump function, and we may as well drop the subscript from now on. The u-integral runs over a small interval of size ≪_ε 1/K; we bound it trivially by the supremum of its integrand times the length of the interval. Furthermore, we cut the q-sum dyadically into pieces q ∼ C.

4. Poisson summation formula. We bring the m- and q-sums inside and take absolute values, thus giving up on obtaining sign cancellation from λ(n_2, n_1). Expanding the square in Ω and making the bound on n_1²n_2 explicit by way of a bump function ϕ with supp ϕ ⊆ [1, 2], we can write the resulting expression, where I′ and C′ indicate that in (4.2) the variables are taken primed. In the next lemma we apply the Poisson summation formula to the n_2-summation. To take advantage of both moduli q/n_1 and q′/n_1 in the Kloosterman sums C and C′, we consider the sum modulo B := qq′/n_1².

Lemma 4. Let Ξ = Ξ(q, q′, m, m′, a, a′) = ψ(m, q, a) ψ(m′, q′, a′)/qq′. With Ω and B as above, the dualized expression holds.

5.1. Bounds on the integrals. We use the oscillations of the I-integral in order to restrict the n_2-range and to bound I. To this end we apply the stationary phase theorem to the inner integral, obtaining the oscillation in w, and then use this information to apply stationary phase to I.

Lemma 5. The integral I is vanishingly small except in the range n_2 ≪_ε N_1 := C/K. Moreover we have I ≪ t^{−1} and, for C ≫ t^δ max{√(NK)/t, N/(Kt)} with any fixed δ > 0, we have the stronger bound (5.1).

Proof. We follow [15, Section 4.2] and apply the stationary phase bound [8, Main Theorem]. First let us study the oscillation in the I(m, N_0 w, q, p) integral. Its phase involves A = −Nm/pq and B = (NN_0 w)^{1/3}/q. Except when B ≍_ε t, one deduces I ≪_ε 1/√t from the second-derivative bound on (5.2). But if B ≍_ε t, we can also apply the argument in [15, Lemma 1]: when B ≍_ε t, repeated integration by parts gives |ξ_1 − ξ_2| ≪ 1/t in the inner integral. This proves the bound I ≪ t^{−1}.

We also find that I is essentially supported on n_2 ≪ N_1. Indeed, opening up I and I′ and focusing on the oscillation in the w-variable, we get a phase of the shape e((NN_0(ξ + u))^{1/3}/q − …). The stationary point of this oscillation is unbounded if n_2 N_0/qq′ ≫_ε (NN_0)^{1/3}/C, so that I is negligible except when n_2 ≪_ε N_1.

From now on assume C ≫ t^δ √(NK)/t, so that B ≪ t^{1−δ}. Because we are in this large-modulus regime, Lemma 2 gives |m| ≍_ε M_0, and thus A ≍_ε t. From the bound m ≍_ε M_0 we deduce B/A ≪ t^{−δ}; in particular, B/A → 0 as t → +∞. For each t, the stationary point satisfies a cubic relation; we can write it as ξ + (B/3A)ξ^{1/3} − t/2πA = 0 and view this as a perturbed cubic equation, with ǫ := B/3A. Note that ǫ → 0 as t → ∞.
We assume the solution is a power series ξ = η_0 + η_1 ǫ + η_2 ǫ² + ⋯ and solve for the coefficients η_i. We can then apply the stationary phase method to obtain a description of I(m, N_0 w, q) with explicit phase and modulus. The modulus is controlled by the second derivative of the phase; the phase itself is φ(ξ_t), and expanding the cube-root term binomially we obtain its explicit form. The stationary phase method hence gives (5.11), up to an error term of arbitrary polynomial decay in pt. The w-dependence enters only through B; pulling the w-integral inside, we measure how much we can gain from stationary phase (note that we have used the expansion of ξ^{−it} at the stationary point). The inner w-integral in (4.8) can be rewritten as (5.12), which no longer depends on w, with h ≪ 1. We can attain the bound (5.1) by applying the second-derivative bound to (5.12). Changing variable w → w³, the w-dependence in the big-O term is a power series in w starting from w²; since B/A ≪ t^{−δ}, the big-O term is smaller than the first term. Thus the stationary point of this oscillatory integral is in the support of ϕ only if (5.14) holds, which translates to n_2 ≍_ε N_1; by the stationary phase method I is negligibly small otherwise. The first term in the exponential is killed by the second derivative, so the size of the second derivative is γ, which means the stationary phase method saves a factor of γ^{−1/2}.

5.2. Bound on the Kloosterman sums. We now bound C. These sums were treated in a previous paper of Munshi [14, Lemma 11] and bounded explicitly in an elementary way. Precisely: for n_2 = 0 we have C = 0 unless q = q′, in which case (5.15) holds; if n_2 ≠ 0, we have C ≪ B(q/n_1, q′/n_1, n_2). We split Ω accordingly, with the terms corresponding respectively to the subsums of (4.7) with n_2 = 0 and a = a′, with n_2 = 0 and a ≠ a′, and finally with nonzero n_2; we also put Ω_0 = Ω_{0,=} + Ω_{0,≠}. Since Ω^{1/2} is at most the sum of the square roots of these pieces, we treat each case separately. The following lemma will be used to show that all the n_1-sums appearing are uniformly bounded.

Lemma 6. Let α ≥ 7/6 be an exponent. Then the n_1-sums below, weighted by n_1^{−α}, are ≪_ε 1.

Proof. We apply the Cauchy–Schwarz inequality in the n_1-sum, separating the factor n_1^{−α}. The first factor is part of a convergent sum if α > 7/6, while for α = 7/6 it is a harmonic-type sum and can be bounded by C^ε ≪ Q^ε ≪_ε 1. For the second factor we apply summation by parts. Put

A(n) = Σ_{n_1²n_2 ≤ n} |λ(n_2, n_1)|²/(n_1²n_2).   (5.21)

By the Ramanujan bound on average, A(n) ≪ n^ε, and the second factor can then be bounded using partial summation. This proves the result.

First consider the zero-frequency case n_2 = 0. By the above, C is nonzero only if q = q′. Inputting the bound (5.15) on C, bounding the arithmetic sum |Ξ| by 1/q², and bounding the integral I by (5.1), we get the first estimate. In the small-q regime, as M_0 ≍_ε p√K/√N, the bound is ≪_ε K²p/(n_1⁴ t), which is strictly smaller than in the first case since K ≪ t^{1−ε}. After inputting the bound Ω_{0,=}^{1/2} ≪_ε K^{1/2}p^{1/2}/n_1² into (4.4), applying Lemma 6 to deal with the n_1-sum, and using (3.38), we obtain the corresponding contribution to S_main.

Zero frequency, a ≠ a′ case. When a ≠ a′, the expected bound on average gives the following. In the large-q regime, plugging in M_0 ≍ pCt/N yields Ω_{0,≠} ≪_ε p²tK/(N n_1³). In the small-q regime we have the bound Ω_{0,≠} ≪_ε K^{5/2}p³/(N^{3/2} n_1³) for the same quantity.
For large q the corresponding contribution to S_main is (5.28). Notice that for N > pt, which is all we need to consider, the contribution from (5.25) dominates the contribution from (5.28). A similar calculation in the small-q regime yields

S_{main,0,≠} ≪_ε pK^{3/2} = N^{1/2}(pt)^{3/4} · (K^{3/4}/(pt)^{3/8})² · (p/N^{1/2}).   (5.29)

Comparing this with (5.25), we note that for N ≫ (pt)^{4/3} and p ≤ t² it is smaller.

Non-zero frequencies: small moduli. For large values of the modulus C, Lemma 5 provides a strong bound for the integral I, and the corresponding terms are treated in the next section. Here we treat the case of small moduli, and therefore assume C ≪ t^δ √(NK)/t or C ≪ t^δ N/(Kt). Concerning the contribution of small moduli (5.33): with the choice K = (pt)^{2/5} it is satisfactorily bounded.

Remark. We can allow for the larger range p < t^{15/13}, for which the bound (5.38) is still subconvex.
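As a numerical illustration of the conductor-lowering mechanism of Lemma 1, the sketch below evaluates one plausible normalization of the v-integral, ∫ V(v) exp(ivK log(1 + h/m)) dv, with a smooth bump V supported in [1, 2]; the specific values of m and K are arbitrary, and the normalization is an assumption rather than the paper's exact display.

```python
# Sketch: the v-integral of Lemma 1 is ~constant for h << m/K and decays
# rapidly once K*h/m >> 1, matching the restriction |n - m| << N/K.
import numpy as np
from scipy.integrate import quad

def V(v):
    """Smooth bump supported in (1, 2)."""
    if 1.0 < v < 2.0:
        u = (v - 1.0) * (2.0 - v)
        return np.exp(-1.0 / u)
    return 0.0

def I(h, m=10**6, K=10**3):
    """|∫ V(v) exp(i * phase * v) dv| with phase = K*log(1 + h/m)."""
    phase = K * np.log1p(h / m)
    re, _ = quad(V, 1, 2, weight='cos', wvar=phase, limit=400)
    im, _ = quad(V, 1, 2, weight='sin', wvar=phase, limit=400)
    return abs(re + 1j * im)

for h in [0, 10**3, 10**4, 10**5]:   # here m/K = 10^3 plays the role of N/K
    print(f"h = {h:>7d}: |I(h)| = {I(h):.2e}")
```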
Hearing outcomes in children with meningitis at Red Cross War Memorial Children's Hospital, Cape Town, South Africa: A silent crisis

Bacterial meningitis has been identified as one of the most common causes of hearing impairment in both adults and children.[1] Globally, 10% of children with acute bacterial meningitis may acquire some degree of permanent bilateral or unilateral hearing loss, which usually develops within 48 hours of hospital admission.[2] South African (SA) studies have documented bacterial meningitis as a risk factor for hearing loss.[3,4] Furthermore, 6% of all acquired sensorineural hearing loss (SNHL) in the paediatric population can be attributed to bacterial meningitis.[3] A 12-year study examining the risk of neurological abnormalities and central auditory dysfunction following meningitis reported that SNHL may remain stable, or may fluctuate spontaneously.[2] Permanent and transient hearing loss develops within the first few days after the onset of meningitis,[1] highlighting the importance of early audiology referrals and accurate audiological diagnosis.[5] It has been reported that delayed diagnosis of hearing loss, which results in the lack of sufficient or quality language input during the critical period of speech and language development, may lead to poor language and academic performance.[6] Children who have had meningitis should therefore ideally be referred for sensitive and comprehensive audiological assessments before discharge from hospital. Despite a large body of evidence illustrating the need for post-meningitis audiological evaluation, many affected children are not assessed and are often lost to follow-up. The poor follow-up rate is probably due to lack of referral and may indicate a lack of awareness that hearing assessments following meningitis are necessary.[1,2,5] Post-meningitis hearing loss may lead to cochlear ossification, an inflammatory process in which the inner ear becomes fibrosed and eventually ossified. Cochlear ossification, which results in an increased risk of partial insertion of the cochlear implant electrode, may occur as early as 3-4 weeks after the onset of meningitis.[7,8] Findings of a retrospective review of 47 cochlear implants done in patients with bilateral profound hearing loss following bacterial meningitis showed that cochlear ossification, diagnosed with magnetic resonance imaging, occurred in almost half (44%) of patients.[9] Ossification typically starts at the round window, proceeding apically.[9] Traditional cochlear implantation methods (round window or cochleostomy insertion) are then not possible. Any child with post-meningitis severe to profound hearing loss therefore needs urgent temporal bone imaging for cochlear ossification and subsequent referral for a cochlear implant.
The objective of this study was to determine the hearing outcomes of children diagnosed with meningitis at Red Cross War Memorial Children's Hospital (RCWMCH), a paediatric tertiary hospital in Cape Town, SA.

Setting
RCWMCH is one of only two dedicated paediatric academic hospitals in sub-Saharan Africa. It serves children from birth to 13 years of age. The Department of Audiology at RCWMCH assesses and treats ~300 children per month.

Ethical approval
The study was approved by the University of Cape Town (UCT) Department of Surgery (ref. no. 2016/075) and the UCT Human Research Ethics Committee (ref. no. 028/2017).

Procedure
A retrospective folder review of all children diagnosed with meningitis and referred for audiological evaluation was conducted over an 18-month period between 1 January 2015 and 30 June 2016. A large amount of data can be accumulated for patients with a retrospective review; however, limitations of this design include incomplete data sets, illegible handwriting in patient records, and missing records.[10] Data collected included demographic information, age at diagnosis, sex, and clinical and audiometric information. Audiometric information included the time from meningitis diagnosis to audiology referral, as well as the type and degree of hearing loss. Clinical information included lumbar puncture (LP) biochemistry and microbiology results, and computed tomography (CT) brain scan results.
The audiological test battery included otoacoustic emissions, tympanometry and auditory brainstem responses (ABRs) where indicated. Behavioural audiometry (pure-tone testing) was used where age-appropriate to determine the degree and configuration of the hearing loss. Hearing outcomes, including the degree of hearing loss in each ear, were determined. According to the American Speech-Language-Hearing Association, normal peripheral hearing was defined as thresholds <15 decibels of hearing loss (dB HL).[11] Hearing loss was present if thresholds were >15 dB HL across frequencies (250-8 000 Hz).

Data were captured on a Microsoft Excel spreadsheet, 2010 version (Microsoft, USA). Descriptive statistical methods were used to analyse patient information and to describe the clinical and audiometric characteristics of the patients.

Demographics
The total number of inpatients diagnosed with unspecified meningitis at RCWMCH between January 2015 and June 2016 was 345, and the total number diagnosed with bacterial meningitis was 68. A total of 16 children (23.5%) with suspected or confirmed meningitis were referred to the Department of Audiology at RCWMCH over this 18-month period. Two children were excluded, one with viral encephalitis and the other with neurocysticercosis. Eight were male (57.1%) and 6 were female (42.9%). The mean (standard deviation (SD)) age at diagnosis was 3.1 months (2.3; range 1-8).

Clinical findings
Twelve children had confirmed bacterial meningitis on LP. The remaining 2 children commenced treatment based on clinical presentation alone. The most common organism cultured was Streptococcus pneumoniae, found in 2/12 children (16.7%). There was one case of meningococcal meningitis. All the children had up-to-date immunisations. Brain CT scans were performed in 6/14 children (42.9%). Features of meningitis were reported in all but one case (5/6, 83.3%). Intracranial complications were reported in 5/14 children (35.7%). No features of middle-ear effusions or opacification of the mastoid air cell system that could be attributed to an otogenic cause were detected in any of the scans. All the children were treated with intravenous antibiotics.

Audiology findings
The mean (SD) time from diagnosis to audiology referral was 17 weeks (16.9; range 1-60). One child was referred almost 1 year after having meningitis. The overall prevalence of hearing loss was 42.9%. Table 2 depicts the audiological profile of participants following meningitis.
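The descriptive analysis above can be reproduced with a few lines of code. The sketch below is illustrative only: the referral-delay values are placeholders, and the category cut-offs above "normal" follow common usage rather than the authors' exact grading table; the study's own definition (normal <15 dB HL, loss >15 dB HL) is kept.

```python
# Minimal sketch of the descriptive statistics and threshold grading used.
import statistics

def grade_hearing(threshold_db_hl):
    """Grade an average pure-tone threshold (dB HL); cut-offs >15 are assumed."""
    if threshold_db_hl < 15: return "normal"
    if threshold_db_hl <= 40: return "mild/moderate"
    if threshold_db_hl <= 70: return "moderately severe/severe"
    return "profound"

referral_delay_weeks = [1, 3, 6, 17, 60]        # illustrative values only
print("mean delay:", statistics.mean(referral_delay_weeks), "weeks")
print("SD:", round(statistics.stdev(referral_delay_weeks), 1))
print(grade_hearing(12), grade_hearing(55), grade_hearing(95))
```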
Discussion
The objective of this study was to determine the hearing outcomes of children with meningitis. Our results showed that nearly one-third of children presented with severe to profound SNHL, a higher incidence of post-meningitis hearing loss than reported in the international literature.[12] The higher incidence could be attributed to the study population being from a tertiary institution, which typically receives a high proportion of critically ill children. Karanja et al.[13] also reported a high prevalence of SNHL (43.4%) following meningitis at a tertiary hospital; the higher prevalence of post-meningitis SNHL in that study may be attributed to their relatively large sample size. High incidences of meningitis have been reported in developing countries, owing to poor access to healthcare and limited-resource environments.[14]

A major finding in the current study was the delay in referral to audiology following meningitis. The average time from meningitis diagnosis to referral was 17 weeks, with one child referred nearly 1 year after having meningitis. Less than a quarter (23.5%) of all children diagnosed with bacterial meningitis at RCWMCH over the 18-month study period were referred to audiology. The current literature suggests that ~40% of patients who have had bacterial meningitis in SA are not referred for audiological assessment.[1] Compared with developed countries, the referral rate in the current study is substantially lower: in the UK, 7.7-27% of children are not referred for audiological assessment following meningitis.[15] Furthermore, with nearly one-third of the children in the current study presenting with severe to profound hearing loss, the delay in referral and subsequent diagnosis may have had a detrimental impact on their eligibility for cochlear implantation.

Any child with bilateral severe to profound hearing loss and CT findings of cochlear ossification should be counselled on an aural rehabilitation approach, such as oral, aural or total communication. However, prompt audiological testing and appropriate referrals to an otolaryngologist and a cochlear implant team are crucial for early detection of children with profound hearing loss, whose families may choose to pursue cochlear implantation as the preferred aural rehabilitation option.

Despite the small sample size in the current study, our results highlight important factors that contribute to hearing outcomes following meningitis in SA. The audiological test battery included tympanometry without measuring acoustic reflexes, so potential objective evidence of hearing loss could have been missed; however, ABRs were done as an objective test where indicated. The participants in the study were confined to inpatients, so data from the outpatient department were not included in the audit, which may have skewed results. Inclusion of outpatient referrals is a basis for future research.

Conclusions
Post-meningitis hearing outcomes in children at RCWMCH are similar to international findings. An important finding in the current study was the substantial delay in referral to audiology. Lack of awareness on the part of healthcare professionals regarding the need for audiological referrals and the appropriate timing thereof, together with poor access to healthcare, could explain the high rate of delayed diagnoses of hearing impairment following meningitis. Recommended solutions in a resource-constrained environment include education of healthcare professionals in order to increase awareness of prompt audiological testing for children diagnosed with meningitis, which will result in early diagnosis and suitable management to improve hearing outcomes.

Update. Since the current study was conducted, the audiology and ENT departments at RCWMCH have embarked on an extensive awareness and education campaign, which has resulted in an increased number of appropriate and early referrals.

Declaration. None.
COGONGRASS ROOT EXTRACT FROM FIVE DIFFERENT SOIL TYPES FOR SUPPRESSING PURPLE BLOTCH AND INCREASING GROWTH AND YIELD OF SHALLOTS

The aim of this study was to examine the efficacy of cogongrass (Imperata cylindrica (L.) Beauv.) root extracts from five different soil types (Typic Udipsamments, Aeric Endoaqualfs (=Gleisal Eutrik), Typic Quartzipsamments (=Regosol Eutrik), Aquertic Chromic Hapludalfs, and Pachic Hapludolls) in suppressing purple blotch and increasing the growth and yield of shallots. A split-plot design was used with 13 treatments, three replicates, and 18 plants per plot. The treatments consisted of a control, the fungicide propineb applied before and after inoculation, and the five types of cogongrass root extract at 50, 60, and 70% concentration applied before and after inoculation. The results showed that the cogongrass root extract collected from Pachic Hapludolls, applied before inoculation, had the strongest effect on the pathosystem components, delaying the incubation period, suppressing disease intensity, slowing the infection rate, and decreasing the AUDPC by 41.85, 69.87, 75.13, and 67.63%, respectively, compared to the control. The Pachic Hapludolls root extract applied before inoculation also increased plant fresh and dry weight per plant, tuber weight per plant, plant fresh and dry weight per plot, and tuber dry weight per plot by 42.7, 49.6, 51.92, 66.75, 72.29, and 73.53%, respectively, compared to the control.

INTRODUCTION
Purple blotch, caused by Alternaria porri (Ellis) Cif., is a disease of shallots that is very damaging and causes significant yield loss (Dar et al., 2020). Purple blotch has been reported to reduce shallot production by up to 97% in onion fields worldwide (Kareem et al., 2012). Control efforts still emphasize synthetic chemical fungicides, whose continuous use has negative impacts on the environment, promotes the emergence of new strains, and damages human health (Idris & Nurmansyah, 2015; Sari et al., 2016). It is therefore necessary to reduce the use of chemical fungicides, for example by using botanical fungicides that are safe and environmentally friendly. Many plants can be used as botanical fungicides, including cogongrass (Imperata cylindrica) (Gusmarini et al., 2014). Cogongrass contains alkaloids, flavonoids, steroids, terpenoids, and tannins, which have antimicrobial effects and form part of the plant's defense mechanism against pathogenic microbes (Seniwaty et al., 2009; Gurjar et al., 2012). Cogongrass can be found in a variety of habitats and soil types, from relatively undisturbed natural areas, and tolerates a variety of growing conditions including shade, drought, and poor soil quality (Bryson et al., 2010). The role of the chemical compounds produced by cogongrass depends on the soil-plant system (Mallik, 2000). This study aimed to test the efficacy of cogongrass root extract from five different soil types to suppress purple blotch and increase the growth and yield of shallots.

MATERIALS AND METHODS
Research Site.
This research was carried out at the Laboratory of Plant Protection, Faculty of Agriculture, and the Integrated Laboratory, Jenderal Soedirman University; at the Agricultural Clinic of the Agriculture and Food Security Service of Tegal Regency; and on farmland in Sidapurna Village, Dukuhturi, Tegal (-6°53'32", 109°5'36", 24 m above sea level, soil type Aeric Endoaqualfs (=Gleisal Eutrik)), from August 2018 to June 2019.

Experimental Design. The in vitro experiment used a completely randomized design with 18 treatments and 3 replicates, consisting of a comparative fungicide (propineb) and cogongrass root extract treatments from five soil types, namely Typic Udipsamments, Aeric Endoaqualfs, Typic Quartzipsamments (=Regosol Eutrik), Aquertic Chromic Hapludalfs, and Pachic Hapludolls. The in vivo experiment used a split-plot design with 13 treatments, 3 replicates, and 18 plants per plot. The main plot was the application time, namely before and after inoculation; the subplot was the cogongrass root extract treatment from the five soil types. The cogongrass sampling technique was cluster random sampling (Taherdoost, 2016).

Preparation of Cogongrass Root Extract. Preparation of the cogongrass extract began with preparing simplicia according to the method of Ahmad et al. (2014), followed by extraction by maceration with 96% ethanol (Zhang et al., 2018). The filtrate was then concentrated with a rotary evaporator at 30-40 °C (Muchtaromah et al., 2018), yielding a concentrated total extract. Flavonoid content was determined against a quercetin standard (Chandra et al., 2014); a worked sketch of this standard-curve calculation is given at the end of this section.

Preparation of the Pathogen Suspension. The pathogenic fungus A. porri was propagated aseptically in sterile potato dextrose broth (PDB). The culture was shaken with a shaking machine (VRN-200) for 10 days at medium speed at room temperature until ready for use (Abdel-Hafez et al., 2013). The A. porri suspension was prepared by adding 900 mL of distilled water to an Erlenmeyer flask containing 100 mL of the pure culture.

Preparation of Shallot Seeds. The shallot seeds used were certified seed bulbs of the Bima Brebes variety from Pokar Suka Tani, Sidapurna Village, Dukuhturi District, Tegal Regency. The seed bulbs used were medium-sized (5-10 g). The seed bulbs had to appear healthy and pithy (dense, not wrinkled) and bright in colour (not dull), and the seed shelf life was 3 months (Sumarni et al., 2012).

Preparation of Land. The soil was worked until loose, then beds were made 13 m long and 1.20 m wide, with a gutter width of 0.5 m and a gutter depth of 0.6 m. The plot size for each treatment was 70 × 30 cm (adjusted to the land conditions). Basal fertilizer was applied before the last hoeing (7 days before planting), namely NPK Mutiara (16:16:16) at 500 kg ha⁻¹, SP-36 at 100 kg ha⁻¹, and KCl at 60 kg ha⁻¹, spread over the beds and then mixed into the soil (according to farmers' practice).

Planting and Fertilization. Seed bulbs were planted at a spacing of 15 × 15 cm using a dibble stick, with holes made as deep as the average bulb. Each bulb was inserted into the planting hole with a screw-like motion, so that the tip of the bulb sat flush with the soil surface; the seeds were not planted too deep. After planting, the entire field was watered with a fine spray. The first follow-up fertilization with N and K fertilizers was carried out at 10 days after planting (DAP) and the second at 30 DAP, at 0.5 of the dose each.
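The sketch below illustrates the quercetin standard-curve quantification mentioned above under Preparation of Cogongrass Root Extract. It is hedged: all absorbance and concentration values are placeholders, not the study's measurements, and a simple linear (Beer-Lambert) fit is assumed.

```python
# Sketch: quercetin-equivalent flavonoid content from a standard curve.
import numpy as np

std_conc = np.array([25, 50, 100, 200, 400])          # mg/L quercetin standards
std_abs  = np.array([0.11, 0.21, 0.42, 0.83, 1.64])   # measured absorbance (toy)

slope, intercept = np.polyfit(std_conc, std_abs, 1)   # linear calibration fit

def flavonoid_mg_per_l(sample_abs, dilution_factor=1.0):
    """Invert the calibration line for an extract's absorbance reading."""
    return dilution_factor * (sample_abs - intercept) / slope

print(f"slope={slope:.4f}, intercept={intercept:.4f}")
print("extract ~", round(flavonoid_mg_per_l(1.73), 1), "mg QE/L")
```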
The dose of N fertilizer was 200 kg ha⁻¹ and the dose of K fertilizer was 100 kg ha⁻¹ (Sumarni et al., 2012).

Fungal Pathogen Inoculation. Shallot plants were inoculated by spraying an A. porri conidial suspension with a density of 1 × 10⁶ conidia mL⁻¹ of water when the plants were 3 weeks after planting (WAP) (Marlitasari et al., 2016). Each plant was sprayed with 5 mL of the suspension (Rai & Singh, 1980). Spraying was carried out at 05:30 PM. The plants were then covered with polyethylene for 48 hours to maintain high humidity; after 48 hours the cover was opened and the plants were left under normal conditions (Marlitasari et al., 2016).

Plant Maintenance. Watering was carried out to rinse the leaves, i.e. to reduce the soil splash adhering to the shallots. Weeds were controlled by manual weeding. Pests were controlled with a bioinsecticide, Bio B10, containing secondary metabolites of Beauveria bassiana as the active ingredient, at a concentration of 10 mL L⁻¹ applied at 3-day intervals (based on farmers' practice).

In Vivo Test. The cogongrass root extract was applied at two times, namely before inoculation (S1) and after inoculation (S2). Application before inoculation was carried out 3 times, when the plants were 10, 15, and 20 DAP. The first application after inoculation was carried out 24 hours after inoculation, with subsequent applications at 5-day intervals (Jhala et al., 2017), namely at 22, 27, and 32 DAP. The dose used was 5 mL per plant (Tombe et al., 2012).

Observed Variables. The inhibitory ability test was carried out by the disc diffusion method (Liu et al., 2016), using disc paper 6 mm in diameter. The antifungal activity was expressed as the inhibition zone (Suryanto et al., 2011):

r = y − x,

where r = inhibition zone, x = radius of the fungal colony with stunted growth (mm), and y = radius of the fungal colony with normal growth (mm). The level of inhibition was calculated by the equation of Bekker et al. (2006), as the inhibition zone expressed as a percentage of the normal colony radius.

The dry colony weight of A. porri was measured by taking the pathogenic fungi from the 6-day inhibition test, adding 10 mL of 1% HCl to each Petri dish and heating in a water bath until the medium melted, pouring the contents onto filter paper of known weight, rinsing with sterile water, drying the colonies remaining on the filter paper in an incubator at 30 °C for 24 hours, and then weighing twice (Supriyanto et al., 2020).

The incubation period was observed daily from inoculation until symptoms appeared. Disease intensity was recorded from 10 days after inoculation (DAI) (Jhala et al., 2017), namely when the plants were 33, 36, 39, 42, 45, and 48 DAP, and calculated as the usual scored disease-severity index. The area under the disease progress curve (AUDPC) was calculated following Ling et al. (2017):

AUDPC = Σᵢ [(yᵢ + yᵢ₊₁)/2] (tᵢ₊₁ − tᵢ),

where yᵢ is the disease intensity at observation time tᵢ. The infection rate was calculated using the epidemiological formula of Van der Plank (1963):

r = [1/(t₂ − t₁)] ln[x₂(1 − x₁)/(x₁(1 − x₂))],

where x₁ and x₂ are the disease proportions at times t₁ and t₂.

Observations of plant height, number of leaves, and number of tillers were made on 10 sample plants per experimental plot, determined systematically in a U pattern (Setiawati et al., 2011), starting at 7, 14, 21, 28, and 35 DAP. Leaf chlorophyll was measured with a SPAD meter at the end of the vegetative phase, while yield components were measured after harvest.
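The epidemiological quantities defined above are easy to compute; the sketch below is hedged, with illustrative observation days and intensities rather than the measured data, and uses the trapezoid form of the AUDPC and Van der Plank's apparent infection rate as given above.

```python
# Sketch: AUDPC (trapezoid rule) and Van der Plank's apparent infection rate.
import math

days      = [33, 36, 39, 42, 45, 48]               # DAP at each scoring (toy)
intensity = [0.02, 0.05, 0.09, 0.15, 0.22, 0.30]   # proportion diseased (toy)

def audpc(t, y):
    """Area under the disease progress curve."""
    return sum((y[i] + y[i + 1]) / 2 * (t[i + 1] - t[i]) for i in range(len(t) - 1))

def infection_rate(t1, x1, t2, x2):
    """Van der Plank (1963): r = 1/(t2-t1) * ln[x2(1-x1) / (x1(1-x2))]."""
    return (1 / (t2 - t1)) * math.log(x2 * (1 - x1) / (x1 * (1 - x2)))

print("AUDPC =", audpc(days, intensity))
r = infection_rate(days[0], intensity[0], days[-1], intensity[-1])
print("r =", round(r, 4), "per unit per day")   # ~0.20, i.e. below 0.5
```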
Leaf area measurements were carried out when the plants were 35 DAP using the cylinder method (Maftuchah & Idiyah, 1995). The total phenol content of the leaves was measured by the Folin-Ciocalteu method of Blainski et al. (2013), with modification.

Data Analysis. Data were subjected to analysis of variance; where treatment effects were significant, a DMRT test at the 5% level was carried out.

RESULTS AND DISCUSSION
Based on the results of the analysis, the cogongrass root extract treatments from the five soil types produced differences in the growth of A. porri (Table 1; numbers followed by the same letter in the same column are not significantly different by DMRT at the 5% level). The 60% root extract collected from Pachic Hapludolls gave better results than the Aeric Endoaqualfs extract and was not significantly different from the other treatments, including the comparator propineb fungicide, in inhibiting the development of A. porri colonies, with an inhibition of 55.80%. This shows that almost all extracts had similar potential to suppress the growth of A. porri in the in vitro test. The chemical compounds produced by cogongrass roots are thought to differ with soil type, and hence to differ in their effects on microbes. Soil allelopathy is influenced by soil conditions, the growing conditions of donor and recipient plants, and climatic conditions; the relevant soil factors are texture, organic and inorganic materials, moisture, and the organisms that affect phytotoxin activity in the soil (Kobayashi, 2004).

The lowest percentage of inhibition was found with the Aeric Endoaqualfs root extract at 60%, which was not significantly different from all other treatments apart from the Pachic Hapludolls extract at 50 and 60% and the Typic Quartzipsamments extract at all concentrations. The propineb treatment at all concentrations did not differ from almost all of the extract treatments.

The two observed variables are clearly interrelated: a large inhibition-zone value and inhibition percentage tended to coincide with a small dry weight of the A. porri colony on PDA, while a smaller inhibition zone and inhibition percentage coincided with a greater colony dry weight. This indicates that the cogongrass root extract can substitute for propineb. The significant effect of the extract is plausibly due to its flavonoid content: the Pachic Hapludolls root extract contained 420.861 mg L⁻¹ of flavonoids, followed by the Aquertic Chromic Hapludalfs extract (369.846 mg L⁻¹), with lower levels in the Typic and other soil types.

Flavonoids are a group of polyphenolic compounds in plants, commonly found in vegetables, fruit, flowers, seeds, honey and propolis (Ahmad et al., 2015). Flavonoids are formed through the shikimate pathway and have antimicrobial and antioxidant properties. Kumar & Pandey (2013) and Kalogianni et al. (2020) state that flavonoid compounds enter fungal cells through holes in the cell membrane, formed where phenolic compounds have denatured the membrane lipids. Protein compounds are denatured by flavonoids through hydrogen bonding; the ability of flavonoids to bind proteins inhibits cell wall formation, so that hyphal growth is also inhibited because the required cell wall components are not supplied.
Apart from its structural role, protein also has a functional role as enzymes: all metabolic reactions in cells are catalyzed by enzymes, which are proteins. These reactions include important biosynthetic and energy-producing reactions, so their inhibition deprives cells of the energy needed for growth (Maslanka et al., 2020). This inhibits hyphal elongation, so mycelial colonies remain smaller. In Vivo Test. Based on the analysis of variance, there were significant differences in incubation period, disease intensity, infection rate, and AUDPC (Table 2). Pathosystem Components. Symptoms of purple blotch began to appear in the 4th week after planting. The shortest incubation period was found in the control, although it was not significantly different from the cogongrass root extracts from Typic Udipsamments and Aeric Endoaqualfs; the root extracts from the other soil types and the fungicide lengthened the incubation period, the longest being the Aquertic Chromic Hapludalfs extract at 41.85% longer than the control, followed by Pachic Hapludolls at 40.25% (Table 2). Meanwhile, the incubation period for the other soil types was not significantly different from the comparator propineb fungicide, and application before and after inoculation did not differ significantly. This was because the chemical composition of the cogongrass root extract differed among soil types. The flavonoid content analysis showed that cogongrass roots from Typic Udipsamments and Aeric Endoaqualfs had the lowest flavonoid content compared with the other soil types, in accordance with Kobayashi (2004) and Yang et al. (2018), who noted that soil factors affect the production of plant secondary metabolites. In line with the short incubation period, the highest disease intensity was found in untreated shallots, which differed significantly from all treatments including the comparator (Table 2). The greatest suppression of purple blotch intensity was obtained with the Pachic Hapludolls root extract applied before inoculation, at 69.87% relative to the control. None of the cogongrass root extract treatments differed from the fungicide treatment, meaning that cogongrass root extract could take over the role of fungicides in managing onion purple blotch, with suppression ranging between 42.17 and 69.87% relative to the control. This was consistent with the in vitro results on fungal colony growth and was supported by the higher flavonoid content of the cogongrass root extract, especially from the Pachic Hapludolls soil type (420.861 mg L-1). The high flavonoid content may play a role in reducing plant susceptibility by enhancing resistance to disease-causing infections (McLay et al., 2020; Shah & Smith, 2020). McLay et al. (2020) further explained that UV-B-induced flavonoids could partially mediate a reduction in disease severity, flavonoid levels being negatively correlated with the amount of Bremia lactucae conidia on lettuce plants. The incubation period data showed no significant difference among the cogongrass root extract treatments, while the disease intensity data differed significantly.
This was presumably due to an insufficient dose of cogongrass root extract or to the slow action of flavonoids against the symptoms. Once these compounds are absorbed by the plants, activity against pathogen attack begins to appear. This accords with Shah & Smith (2020), who stated that flavonoids are secondary metabolites and biostimulants that play a key role in plant growth by conferring resistance to certain biotic and abiotic stresses. Application timing had no significant effect between before and after inoculation. In almost all treatments, however, application before inoculation tended to give greater suppression than application after inoculation (Table 2). The treatment applied before inoculation of the pathogen presumably served as a preventive measure. This is in accordance with Khalid et al. (2019), who noted that the application of flavonoids in particular is related to the protection of plants from pathogen attack and plays a very important role in plant resistance to pathogens. Meanwhile, disease intensity tended to increase with the age of the shallot plants (Figure 1), although disease progression was much greater in the untreated than in the treated plants. This is in accordance with Mierziak et al. (2014), who stated that flavonoid compounds are transported to the site of infection and cause hypersensitivity reactions, thus inhibiting disease progression. The highest infection rate was found in the control and the lowest in the Pachic Hapludolls cogongrass root extract applied before inoculation (Table 4). The development of the infection rate over the five observations can be seen in Figure 2. The cogongrass root extract was able to match propineb in suppressing the rate of infection: the Pachic Hapludolls extract applied before inoculation slowed the infection rate by 75.13% compared with the control. Overall, however, the mean infection rate was less than 0.5 per unit per day. According to Van der Plank (1963), the infection rate indicates whether the pathogen is aggressive, the variety susceptible or resistant, and the environment favorable or not for disease development; an r value greater than 0.5 units per day means that the pathogen is aggressive, the variety susceptible, and the weather favorable. Based on the 5% DMRT, the slowing of the infection rate did not differ significantly between the propineb fungicide and any of the cogongrass root extracts, and there was no significant difference between applications before and after inoculation. This is in line with the incubation period and disease intensity results above, and is supported by the important role of the flavonoids contained in cogongrass root extracts in protecting plants against biotic and abiotic stresses (McLay et al., 2020; Shah & Smith, 2020). The Pachic Hapludolls root extract applied before inoculation showed the highest AUDPC suppression, 67.63% relative to the control, consistent with the suppression of disease intensity and infection rate.
The AUDPC values for the cogongrass root extracts were not significantly different from propineb; indeed, the AUDPC value for the propineb treatment tended to be higher than for the Pachic Hapludolls and Aquertic Chromic Hapludalfs root extracts. Application of propineb before inoculation left the shallot plants relatively more susceptible to the pathogen. This is thought to reflect the contact nature of propineb, which, when applied before inoculation, can be washed off or evaporate. This accords with Majeed et al. (2014), who stated that systemic fungicides are more effective at controlling disease severity and progression than contact fungicides. Because they are not absorbed into host tissue, contact fungicides are effective only when applied at shorter intervals (Carmona et al., 2020). The AUDPC chart is shown in Figure 3. AUDPC is a parameter measuring the progression of disease severity over a given time (Apriyadi et al., 2013). The higher the AUDPC value, the lower the resistance level or percentage of inhibition in the treatment (Gunaeni, 2015). According to Nuryani et al. (2011), the lower the AUDPC, the more effective the treatment is at controlling the pathogen; conversely, the higher the AUDPC, the less effect the treatment has on pathogen infection. Based on the data above (Table 2), the cogongrass root extract treatments were able to reduce the AUDPC value; there is thus a prospect of using cogongrass root extract to control purple blotch on shallots. Overall, the cogongrass root extract could match the propineb fungicide in suppressing the development of purple blotch on shallots and in supporting shallot resistance. This was reinforced by the analysis of total phenols in the shallots: propineb fungicide (12.97 mg g-1), and the cogongrass root extracts of Typic Udipsamments (11.51 mg g-1), Aeric Endoaqualfs (12.22 mg g-1), Typic Quartzipsamments (12.84 mg g-1), Aquertic Chromic Hapludalfs (11.66 mg g-1), and Pachic Hapludolls (13.84 mg g-1). Growth Components. Mean plant height, number of leaves, leaf area, chlorophyll content, and number of tillers showed no significant difference between the treatments and the control (Table 3). Vegetative growth is influenced more by the availability of nutrients, which maintain the survival of the plant; these nutrients include N, P, and K. The availability of the nutrients plants need results in better vegetative growth and accelerates the onset of the generative phase (Isda et al., 2013). Figure 3. Differences in AUDPC values of onion purple blotch due to cogongrass root extract treatments from different soil types (S1 = before inoculation, S2 = after inoculation). Table 3. Effect of the soil type where cogongrass grows and of application time on plant height, number of leaves, leaf area, leaf chlorophyll, and number of tillers. Yield Components. All treatments had no significant effect on the number of tubers per plant (Table 4).
This was thought to be related to the growth components, which also did not differ significantly. According to Sumarni et al. (2012), the number of tillers or tubers is determined more by genetic factors than by environmental factors, including fertilization. Sekara et al. (2017) likewise stated that the number of shallot tillers is a genetic trait that cannot easily be changed by external factors. The Pachic Hapludolls cogongrass root extract applied before inoculation gave the greatest increase in fresh weight per plant, 42.7% over the control (Table 4). It also gave the greatest increases in plant dry weight per plant and tuber dry weight per plant, 49.6 and 51.92%, respectively, over the control. There was no significant difference between the propineb fungicide treatment and any of the cogongrass root extract treatments, consistent with the low values of the pathosystem components (Table 2). In line with Table 4, the Pachic Hapludolls root extract applied before inoculation produced the greatest increases in plant fresh weight, plant dry weight, and tuber dry weight per plot, at 66.75, 72.29, and 73.53%, respectively, over the control (Table 5). The propineb treatment was not significantly different from any of the cogongrass root extract treatments. Among the extract treatments themselves, however, tuber dry weight per plot differed: the Pachic Hapludolls extract differed from those of Typic Quartzipsamments, Aeric Endoaqualfs, and Typic Udipsamments. Compared with the control, the Pachic Hapludolls root extract applied before inoculation increased the yield of dry tubers per hectare by 73.53%. The effect of the cogongrass root extract on the yield components may arise because the extract contains nutrients needed by shallot plants. Hagan et al. (2013) and Isda et al. (2013) added that, in addition to producing phenolic compounds, cogongrass also supplies nutrients that can act as growth promoters. Cogongrass roots contain heavy metals such as iron (Paz-Alberto et al., 2007; de la Fuente et al., 2017). Iron (Fe) plays a role in chlorophyll formation, while Cu is a constituent of enzymes and is involved in chlorophyll formation and in carbohydrate and protein metabolism (Printz et al., 2016). Table 4. Effect of the soil type where cogongrass grows and of application time on the number of tubers, plant fresh and dry weight, and tuber dry weight per plant. Table 5. Effect of the soil type where cogongrass grows and of application time on plant fresh weight, plant dry weight, and tuber dry weight per plot. In Tables 3-5, numbers followed by the same letter in the same column are not significantly different according to DMRT at the 5% level; CCI = Chlorophyll Content Index.
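As a brief sketch of the statistical workflow behind the letter groupings reported in the tables above (analysis of variance followed by a post-hoc comparison at the 5% level), the example below uses hypothetical percent-inhibition replicates, not the study's data. DMRT itself is not implemented in scipy or statsmodels, so Tukey's HSD is shown as a commonly available post-hoc substitute; the study itself used DMRT.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical percent-inhibition replicates for three treatments.
pachic   = [55.8, 54.2, 57.1, 56.0]
aeric    = [41.3, 39.8, 43.0, 40.5]
propineb = [53.9, 55.1, 54.4, 52.8]

f_stat, p_value = f_oneway(pachic, aeric, propineb)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:  # significant treatment effect -> post-hoc comparison at the 5% level
    values = np.concatenate([pachic, aeric, propineb])
    groups = ["Pachic"] * 4 + ["Aeric"] * 4 + ["Propineb"] * 4
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```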
2021-07-06T05:30:41.737Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "0cdb26b1e51cd40e49eef75bd74610d1296236e9", "oa_license": "CCBYNC", "oa_url": "http://jhpttropika.fp.unila.ac.id/index.php/jhpttropika/article/download/593/505", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0cdb26b1e51cd40e49eef75bd74610d1296236e9", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
211033774
pes2o/s2orc
v3-fos-license
When Back Pain Turns Deadly: An Unusual Presentation of Lung Cancer Back pain is a common presenting concern in physician offices and emergency departments alike, with etiologies ranging from minor injuries to severe life-threatening illnesses. This case details the clinical course of a 68-year-old former smoker with no pulmonary symptoms who presented with back pain multiple times before developing cord compression syndrome and being diagnosed with non-small cell lung cancer (NSCLC). It demonstrates the importance of lung cancer screening and the necessity of monitoring for red flags in cases of back pain. Introduction Lung cancer manifests in a myriad of ways, but according to an analysis of 2293 patients diagnosed with NSCLC, 54.7% had cough and 45.3% had dyspnea at the time of diagnosis, making these the two most common presenting symptoms [1]. However, the patient presented in this case study had no pulmonary signs or symptoms and reported overall good health other than persistent back pain. He subsequently developed cord compression syndrome, which led to the diagnosis of NSCLC, and, despite treatment, he died within several months. Although advanced lung cancer with bone metastasis and cord compression has been described, the absence of pulmonary symptoms in this setting is atypical. Case presentation A 68-year-old male veteran with a past medical history of hypertension (HTN), COPD, and former heavy tobacco use presented repeatedly over several months for right-sided back pain suspected by multiple providers to be secondary to a musculoskeletal etiology. Review of systems (ROS) was consistently unremarkable for pulmonary symptoms, his lungs were clear to auscultation, and two chest radiographs (CXRs) showed no acute abnormalities. After about three months of symptomatic treatment with oral and topical analgesics and a trigger point injection, he again presented to the ED, this time with severe back pain and electric-like radiation across his right chest. He was able to ambulate but reported lower extremity numbness and weakness. He had no saddle anesthesia or incontinence. CT of the thoracic spine showed a right paraspinal soft tissue mass along T4-T6 (Fig. 1). During subsequent thoracic MRI, the numbness in his legs increased, he developed altered sensation from the nipples down, and he was no longer able to ambulate. The MRI revealed acute spinal cord compression secondary to tumor invasion through the T3-T4 and T4-T5 neural foramina (Fig. 2). He was emergently transferred to a higher level of care, where he underwent decompression of the spinal canal with T4, T5, and T6 laminectomies and debulking of the tumor on the right side of the spinal canal. During this hospitalization, the ROS remained negative for weight loss, cough, dyspnea, and hemoptysis. Laboratory analysis was remarkable only for macrocytic anemia. The surgical pathology was immediately concerning for aggressive malignancy. After the surgery, the patient began to regain strength in his lower extremities and was able to ambulate with assistance prior to discharge. Given the thoracic location and the patient's significant smoking history, lung cancer was high on the differential despite his lack of respiratory symptoms. Pathology confirmed that the patient had NSCLC with high PD-L1 expression. EGFR status was indeterminate. PET/CT showed metastasis to the distal thoracic spine and bilateral adrenal regions, consistent with advanced disease.
Under the management of the hematology/oncology department at an outside facility, the patient began chemoradiation with paclitaxel and carboplatin. Per the National Comprehensive Cancer Network guidelines, pembrolizumab is the preferred first-line treatment in patients with PD-L1 greater than 50% and negative or indeterminate driver mutations [2]. In this patient, radiation was planned, and the outside oncologist initially decided on platinum-based therapy, with other treatment options mentioned in subsequent notes. Unfortunately, the patient's clinical status rapidly declined. Repeat imaging revealed extensive liver metastasis (Figs. 3 and 4), growing adrenal metastases, mild to moderate cord compression from the enlarging primary tumor, and acute pulmonary emboli in the right middle lobe and segmental right upper lobe pulmonary arteries. Hospice was consulted and the patient died shortly thereafter. Discussion Although lung cancer may present in numerous ways, the presentation usually includes pulmonary symptoms [1]. Unfortunately, bone metastasis is also common in lung cancer, occurring 30-40% of the time [3]. The vertebral bodies are the most common site, and bone pain may be an initial symptom of lung cancer 6-25% of the time [4]. When all cancer types are taken into consideration, bone metastasis presents as cord compression about 5% of the time. Lung cancer is second only to breast cancer as a cause of metastatic cord compression and is responsible for approximately 15% of these cases [5]. Current United States Preventive Services Task Force guidelines recommend annual lung cancer screening with low-dose CT (LDCT) for adults aged 55 to 80 with at least a 30 pack-year smoking history who currently smoke or quit within the past 15 years. This is a Grade B recommendation based on the National Lung Screening Trial from the National Cancer Institute, which showed a 20% decrease in lung cancer mortality in the LDCT group as well as a 7% decrease in all-cause mortality, in terms of relative risk. This translates into 4 additional lung cancer deaths prevented and 5 additional all-cause deaths prevented for every 1000 people screened via LDCT compared with screening by CXR [6,7]. Despite a 40-50 pack-year smoking history with a recent quit date and good functional status prior to his cancer diagnosis, annual LDCT screening was never discussed with the patient. It is uncertain what the outcome of this patient's disease would have been had he received the recommended screening. Interestingly, the symptom of back pain eventually led to the diagnosis in the absence of more common pulmonary symptoms. Conclusion The American College of Physicians 2007 guidelines for the diagnosis and treatment of low back pain recommend more involved investigation into the etiology of back pain when certain high-risk features are present. These "red flags" include a history of osteoporosis, a history of cancer, weight loss, older age, fevers, neurologic concerns such as incontinence and saddle anesthesia, and pain that does not improve after 1 month [8]. Although this patient's back pain was in the thoracic rather than the lumbar region, his older age and persistent pain were both concerning features, and although CXRs were obtained, the malignancy was not apparent on plain radiography. Providers must therefore be vigilant of pain as an indicator of serious illness, especially in high-risk patients, even if initial imaging is unremarkable.
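As a quick check of the screening arithmetic quoted in the Discussion, the absolute risk reductions of 4 and 5 per 1000 imply a number needed to screen (NNS), assuming the standard definition NNS = 1/ARR:

$$\mathrm{NNS}_{\text{lung-cancer death}} = \frac{1}{4/1000} = 250, \qquad \mathrm{NNS}_{\text{all-cause death}} = \frac{1}{5/1000} = 200.$$

That is, roughly 250 eligible people would need to be screened with LDCT rather than CXR to prevent one lung cancer death, and roughly 200 to prevent one death from any cause.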
Funding. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Manuscript preparation. All authors had access to the data and a role in writing the manuscript.
2020-01-30T09:06:17.835Z
2020-01-28T00:00:00.000
{ "year": 2020, "sha1": "83682c35706d0dea827eb0b0225df2c8dd4a4cc9", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.rmcr.2020.101009", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "717bf0f6cde5964fde838d0b049328b5e3b7f5c7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4922922
pes2o/s2orc
v3-fos-license
Improved Detection of Cytokines Produced by Invariant NKT Cells Invariant natural killer T (iNKT) cells rapidly produce copious amounts of multiple cytokines after in vivo activation, allowing for the direct detection of a number of cytokines directly ex vivo. However, for some cytokines this approach is suboptimal. Here, we report technical variations that allow the improved detection of IL-4, IL-10, IL-13 and IL-17A ex vivo. Furthermore, we describe an alternative approach for the stimulation of iNKT cells in vitro that allows a significantly improved detection of cytokines produced by iNKT cells. Together, these protocols allow the detection of iNKT cell cytokines ex vivo and in vitro with increased sensitivity. Results Influence of the fixation method on cytokine detection. Following activation with αGalCer in vivo, the majority of iNKT cells produced IL-4, which can be detected directly ex vivo, meaning without the need for TCR cross-linking or pharmacologic activators, and without a requirement for culture in the presence of blockers of protein transport through the Golgi apparatus (Fig. 1A). However, the intensity of the staining tended to be low (Fig. 1A and data not shown), and this can at times make the discrimination of positive events difficult. Therefore, we tested several alternatives for staining and fixation to improve the intracellular staining for IL-4, combined with the use of different fluorophores. We found that fixation of the cells with Cytofix/Cytoperm for 10 minutes at 37 °C, instead of the recommended 4 °C, significantly increased the staining intensity for most IL-4 conjugates, without negatively affecting surface staining (Fig. 1A and data not shown). This was seen with both of the αIL-4 antibody clones tested, 11B11 and BVD6-24G2 (Fig. 1A). The increased staining intensity allowed significantly more iNKT cells to be detected as IL-4+ in the case of FITC- and PE-Cy7-conjugated antibodies, but not in the case of AF647- and PE-CF594-conjugated antibodies (Fig. 1A and Supplementary Figure 1A). A similar variability in the percentage of activated iNKT cells classified as cytokine-positive was also noted for IL-2 and IL-13 staining. This depended on the antibody conjugates tested, and for some of the conjugates, fixation at 37 °C led to an increased staining intensity (Fig. 1B). In contrast, no difference in the staining intensity of the other cytokines tested, namely GM-CSF, IFNγ, IL-10, IL-17A and TNF, was observed (Fig. 1B, Supplementary Figure 1B, and data not shown). Importantly, changing the temperature of the fixation step did not negatively affect the surface staining of any of the tested markers (Supplementary Figure 2). Therefore, fixation of activated iNKT cells at 37 °C instead of 4 °C is preferable for IL-2, IL-4 and IL-13 detection. iNKT cell IL-17A requires in vitro cytokine accumulation. In recent years, functional subsets of iNKT cells have been defined 7,8. The definition of iNKT cell subsets is largely based on their expression of transcription factors and their function, especially significant biases in cytokine production of the respective iNKT cell types. NKT1 9, NKT2 9,10 and NKT17 11-14 cells are defined as the iNKT cell subsets biased towards Th1, Th2 or Th17 cytokines, respectively. The underlying gene programs are imprinted during thymic development 15. NKT10 cells were characterized by IL-10 production 16-20.
NKT FH and FoxP3+ iNKT cells 24,25 were defined based on their similarities with T FH and FoxP3+ T cells, respectively. However, the detection of IL-10 and IL-17A production by activated iNKT cells of the appropriate functional subtype is particularly poor when the cells are analyzed directly ex vivo (Figs 2A, 3A and data not shown). For the detection of cytokines produced by conventional, MHC class II-reactive T cells, an in vitro incubation of the cells after purification in the presence of Golgi-transport inhibitors is routinely used to improve cytokine detection 26. We adopted this method for the detection of IL-17A production by iNKT cells. Mice were injected i.v. with αGalCer, and 90 min later splenocytes were obtained and cultured for 2 h in the presence of Golgi-transport inhibitors. As shown in Fig. 2A, the IL-17A-producing subset is relatively infrequent in the spleen, but importantly, the in vitro accumulation of biosynthesized IL-17A was required for the effective detection of IL-17A+ iNKT cells. For several other cytokines tested, namely GM-CSF, IFNγ, IL-2, IL-4 and IL-13, a 2 h in vitro incubation resulted in a marked increase in the percentage of cytokine-positive iNKT cells (Fig. 2B). For IL-10 an improvement in cytokine staining was also observed, which, however, did not reach statistical significance in all experiments (data not shown). In contrast, no difference was evident for TNF after a 2 h in vitro incubation (Fig. 2B). Furthermore, extending the in vitro incubation beyond 2 h did not improve cytokine detection further (data not shown). Therefore, in vitro culture of in vivo stimulated iNKT cells for 2 h in the presence of Golgi-transport inhibitors is required for efficient IL-17A detection and clearly improves the detection of most other iNKT cell cytokines. Figure 2. Effective detection of iNKT cell IL-17A ex vivo required cytokine accumulation in vitro. C57BL/6 mice were either mock treated or injected i.v. with 1 μg αGalCer, and 90 min later expression of the indicated cytokines by splenic iNKT cells was analyzed by ICCS. Cells were either stained directly ex vivo (ex vivo) or after a 2 h in vitro incubation at 37 °C in the presence of the Golgi-transport inhibitors brefeldin A and monensin (+2 h). (A) Intracellular IL-17A produced by gated iNKT cells is depicted against CD4 for representative data, and as a summary graph (left panels). (B) Production of the indicated cytokines by iNKT cells is depicted as a summary graph (left panels) and representative data (right panels). ns = not statistically significant. Representative data from one of at least three independent experiments are shown. Effective detection of iNKT cell IL-10 requires dead cell removal. IL-10-producing iNKT cells are a relatively small subset in the spleen, but they are enriched in adipose tissue and increased long term after strong or repeated antigenic stimulation 19. We noticed previously that the maximal number of IL-10+ iNKT cells could be detected after stimulation in vitro with PMA and ionomycin 19. However, when we compared the IL-10 staining after PMA/ionomycin stimulation in vitro in iNKT cells from splenocytes and peripheral blood mononuclear cells (PBMCs), we noted a clearly stronger IL-10 staining in iNKT cells derived from PBMCs compared to splenocytes (Fig. 3A). As we did not expect such a difference in the iNKT cells present in PBMCs compared to the spleen, we tested whether the different purification methods employed could account for the observed difference. Whereas splenocytes were utilized directly after the single cell suspension was obtained, PBMCs were first purified via a density gradient to remove red blood cells and dead cells. Therefore, we compared the IL-10 staining after PMA/ionomycin stimulation in vitro of splenic iNKT cells that were used either directly ex vivo or after purification via a density gradient. As shown in Fig. 3B, the IL-10 staining in iNKT cells improved significantly, with regard to both the percentage of IL-10+ cells detected and the intensity of the staining, when dead cells were removed from the splenocytes prior to stimulation. Given these results, we tested whether the removal of dead cells would also allow an improved detection of IL-10+ iNKT cells ex vivo after αGalCer injection. Mice were injected i.v. with αGalCer, and 90 min later splenocytes were obtained and analyzed either directly ex vivo or after purification via a density gradient. To allow for accumulation of IL-10 in the iNKT cells, the splenocytes were cultured for 2 h in vitro in the presence of Golgi-transport inhibitors. Again, the purification via a density gradient allowed an improved detection of IL-10+ iNKT cells (Fig. 3C). Therefore, for the optimal detection of IL-10, the initial removal of dead cells via a density gradient and incubation in vitro in the presence of Golgi-transport inhibitors were required. Dead cell removal allows for improved detection of multiple cytokines. Whereas the large majority of iNKT cells produce cytokines following activation with αGalCer in vivo, on a per cell basis their response after in vitro stimulation with αGalCer is weaker (Fig. 4 and data not shown). Given the clear improvement of the IL-10 staining by the elimination of dead cells, we tested whether a similar approach would improve cytokine detection by iNKT cells following in vitro stimulation with αGalCer. C57BL/6 splenocytes were either left untreated or purified by a density gradient before the cells were incubated in vitro for 5 h in the presence of αGalCer and Golgi-transport inhibitors. As shown in Fig. 4, although the optimal in vitro stimulated responses did not reach the intensities observed when cells were analyzed ex vivo, significantly more iNKT cells from the gradient-purified splenocyte population scored positive for cytokine production after αGalCer stimulation. Additionally, the intensity of the cytokine staining obtained tended to be higher in iNKT cells from purified splenocytes (Fig. 4). The purification of splenocytes by a density gradient, either after αGalCer in vivo stimulation followed by a 2 h in vitro culture (Supplementary Figure 3A) or before in vitro stimulation with PMA and ionomycin (Supplementary Figure 3B), also allowed for increased detection of cytokine-positive iNKT cells. This increase in cytokine-positive iNKT cells was statistically significant for most of the cytokines. Altogether, the removal of dead cells by a density gradient before in vitro culture allows for clearly improved cytokine detection in iNKT cells by ICCS. Kinetics of iNKT cell cytokine production. Having established an optimized protocol for the detection of iNKT cell cytokines at the single cell level, we tested its utility by measuring the induction of cytokine production by iNKT cells over time. To this end, C57BL/6 splenocytes were stimulated in vitro with either PMA/ionomycin or with αGalCer.
The cytokines produced by iNKT cells were measured between 0.5 and 4 h after stimulation with PMA/ionomycin or between 1 and 5 h after stimulation with αGalCer. GM-CSF, IFNγ, IL-2, IL-4, IL-10, IL-13, IL-17A and TNF were measured in parallel by ICCS. Following stimulation with PMA/ionomycin, the percentage of iNKT cells producing any of the cytokines measured reached at least 50% of the maximal response after 2 h, with IL-10 constituting the exception, requiring 3 h (Fig. 5). Although most splenic iNKT cells in C57BL/6 mice have been reported to be NKT1 cells 9, and the highest frequency of the PMA/ionomycin-stimulated cells produced TNF, a high percentage of the cells also produced IL-4, while relatively few cells were positive for IL-2 or IL-13. Therefore, a rapid, multi-cytokine response was elicited by the strong stimulation achieved by PMA/ionomycin. As expected, the stimulation with αGalCer showed a slightly delayed response: cytokine production reached more than 50% of the maximal response after 3 h, rather than 2 h. A large proportion of the cells produced IL-4 after antigen stimulation, even larger than the percentage that produced TNF, with a reduced percentage producing IFNγ (Fig. 5). Furthermore, the standard deviation of the cytokine values following αGalCer stimulation tended to be larger than after PMA/ionomycin stimulation. For both methods of stimulating iNKT cells, the IL-10 response included the fewest cells, and it was also the slowest to rise. Strain comparison of cytokine production. Immune responses in BALB/c mice are generally more Th2-biased than in C57BL/6 mice 27. In agreement with this, it has been reported that more Th2-like NKT2 cells are present in the thymus of BALB/c than of C57BL/6 mice 9. We compared iNKT cell production of GM-CSF, IFNγ, TNF, IL-2, IL-4, IL-10, IL-13 and IL-17A in these two strains. C57BL/6 or BALB/c splenocytes were stimulated in vitro with either PMA/ionomycin or with αGalCer, and the cytokines produced by iNKT cells were measured after 0.5-4 h for PMA/ionomycin or 1-5 h for αGalCer. Interestingly, the cytokine response was not significantly different for iNKT cells derived from C57BL/6 (Fig. 5) or BALB/c (Fig. 6A,B) splenocytes, irrespective of the in vitro stimulation method. To verify that this comparable response was not the result of the in vitro conditions, we stimulated C57BL/6 and BALB/c mice in vivo with αGalCer for 90 min and measured the iNKT cell cytokine response by ICCS. Under these conditions, the cytokine response of BALB/c-derived iNKT cells tended to be lower for all tested cytokines than that of C57BL/6-derived iNKT cells (Fig. 6C). However, this difference was small and reached statistical significance only for IL-4, IFNγ and TNF. Together, these data suggest that the cytokine response of splenic iNKT cells is largely comparable in C57BL/6 and BALB/c mice in vivo and in vitro.
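As an illustration of how a time-to-half-maximal-response readout like the one used in the kinetics experiments above can be extracted from an ICCS time course, the sketch below interpolates linearly between sampled time points. The function and all values are hypothetical illustrations, not the study's analysis code or data.

```python
def time_to_half_max(times, percents):
    """Time at which the response first reaches 50% of its maximum (linear interpolation)."""
    half = max(percents) / 2
    for i in range(len(times) - 1):
        p0, p1 = percents[i], percents[i + 1]
        if p0 < half <= p1:  # the half-maximum is crossed within this interval
            return times[i] + (half - p0) / (p1 - p0) * (times[i + 1] - times[i])
    return None  # half-maximum never reached within the sampled window

hours = [0.5, 1, 2, 3, 4]                # sampling times after stimulation (h)
il4_pos = [5.0, 18.0, 42.0, 55.0, 60.0]  # % IL-4+ iNKT cells, hypothetical
print(f"half-maximal response at ~{time_to_half_max(hours, il4_pos):.1f} h")  # ~1.5 h
```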
Discussion The copious amounts of some cytokines, like IFNγ, TNF, IL-4 and IL-13, produced by iNKT cells in vivo make it possible to detect and quantify them directly ex vivo. However, this common practice is suboptimal for other cytokines like IL-2, IL-10, IL-17A and GM-CSF. We describe here an optimized protocol for the detection of iNKT cell cytokines ex vivo. Furthermore, we describe an improved protocol for in vitro stimulation that allows a significantly improved detection of iNKT cell cytokines. Detailed 'step-by-step' procedures for these protocols are provided in the Supplemental information. We first noted that raising the temperature of the fixation step from 4 °C to 37 °C significantly increased the detection of IL-2, IL-4 and IL-13 produced by iNKT cells, without negatively affecting the detection of other cytokines or the staining of molecules on the cell surface (Fig. 1A). The reason for this difference, however, is not yet clear. Furthermore, similar to conventional T cells, the in vitro incubation of iNKT cells after in vivo stimulation in the presence of Golgi-transport inhibitors significantly improved the detection of the cytokines GM-CSF, IFNγ, IL-2, IL-4, IL-13 and IL-17A (Fig. 2). Interestingly, the purification of splenocytes by a density gradient was essential for the efficient detection of IL-10+ iNKT cells (Fig. 3). Furthermore, such purification before in vitro stimulation also significantly improved the detection of other iNKT cell cytokines (Fig. 4). The effect of the density-gradient centrifugation is likely due to the removal of dead and apoptotic cells. Thus, our data on the functional impairment of iNKT cells during in vitro culture are in line with a report showing that iNKT cells are sensitive to cell death induced by NAD released from apoptotic cells 28. One surprising result of the data presented is the largely comparable cytokine production of splenic iNKT cells derived from C57BL/6 and BALB/c mice in vivo and in vitro (Figs 5 and 6). Immune responses in BALB/c mice are generally more Th2-biased than in C57BL/6 mice 27. In agreement with this is the finding that more Th2-like NKT2 cells are present in BALB/c than in C57BL/6 mice 9. However, in that study 9 cytokine data were only reported for the thymus and not for the spleen. Therefore, organ-specific differences might account for the strain-dependent differences observed previously in the thymus 9 and the comparable responses we observed in the spleen. Additionally, NKT2 cells were reported to be located preferentially in the T cell zones of the white pulp of the spleen 29, and are therefore less easily activated by antigens injected by the i.v. route 29. This might explain the lack of a marked difference between C57BL/6 and BALB/c mice we observed in vivo, but cannot explain the similar outcome we obtained with in vitro stimulated cells. The latter finding is surprising, as the induction of the transcription factor Nur77, which acts as a faithful marker for TCR engagement in iNKT cells 30, was reported to be equally induced in splenic NKT1 and NKT2 cells following in vitro stimulation 29. The reason for this discrepancy is currently not known. Nonetheless, our study suggests that the Th2 bias of the BALB/c mouse does not extend to splenic iNKT cells. In summary, the described protocols allow the improved detection of iNKT cell cytokines by ICCS ex vivo and in vitro. The alterations to the protocol outlined here were not tested for conventional T cells. However, it is possible that some aspects are transferable to conventional, peptide plus MHC class II-reactive T cells. Material and Methods Mice. All mice were housed under SPF conditions in the vivarium of the La Jolla Institute for Allergy and Immunology (LJI, La Jolla, USA) or the Izmir Biomedicine and Genome Center (IBG, Izmir, Turkey) in accordance with the respective institutional animal care committee guidelines. C57BL/6 and BALB/c mice were purchased from the Jackson Laboratories (Bar Harbor, ME).
2018-04-03T02:00:16.474Z
2017-11-30T00:00:00.000
{ "year": 2017, "sha1": "289fb56a5f41a92cdf6f67d1b8eba24c5081aafa", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/s41598-017-16832-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1b5a69fff4588d06efa967f79a1611220f2c3504", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
56430320
pes2o/s2orc
v3-fos-license
Editor Note: Medical Microbiology & Diagnosis Editorial The current issue (Volume 5, Issue 2) of JMMD contains 12 articles: seven research articles, one case report, three editorials and one review article. For your convenience, below are a few words to let you know what I think you can expect from each of these articles. Research by Pillai and Mini studied polysaccharides, which are potential candidates for the development of vaccines. In this research, the capsular polysaccharide virulence factor of Pasteurella multocida (DP1) was isolated and structurally characterized by infrared (IR) spectroscopy and by nuclear magnetic resonance (NMR) spectroscopy. This analysis will help facilitate anti-Pasteurella vaccine design [1]. Research by El-Banna et al. investigated the effects of antihistamine drugs on the adaptations of Gram-negative bacteria that allow them to resist antimicrobial agents. Marked synergism was detected in multidrug resistant (MDR) Klebsiella pneumoniae isolates when the ethanolamine antihistaminic, diphenhydramine, was used in combination with a variety of antibiotics such as azithromycin, erythromycin, amikacin, gentamicin or ciprofloxacin. This information can be useful in the treatment of infectious diseases, especially in this era of emerging multi-drug resistant strains [2]. In leprosy, the lepromatous form of the disease is more severe and results from suppression of the host T cell response. T regulatory cells, which suppress the T cell response, have been found at greater frequency in the blood of leprosy patients and at the site of leprosy infection. A study evaluating the role of the Mycobacterium leprae antigen phenolic glycolipid 1 (PGL-1) in the induction of T regulatory cells in leprosy patients is well described by Bhavyata Dua [3]. Dilnessa et al. discussed the fact that approximately 20 million cases of severe sepsis occur each year throughout the world. Blood cultures play a major role in the diagnosis and management of those infections. Their article reviews the principles, technical requirements and limitations of current blood-culture techniques [4]. Weltman noted that the design and production of an anti-Zika virus (ZIKV) vaccine is important because of the association of infection by ZIKV with microcephaly in the developing fetus. Reported here is a bio-informatic analysis of the ZIKV envelope E protein based on both Shannon entropy and B cell epitope prediction. This analysis aims to identify loci within the ZIKV E protein with the potential to serve as targets for protective anti-ZIKV vaccines [5]. An editorial by Kandi provides an overview of human infections caused by Bacillus species other than Bacillus anthracis and the reasons for their probable under-reporting.
It is suggested that many clinical microbiology laboratories dismiss these bacteria as laboratory contaminants and that careful clinical and laboratory evaluations are required in order to determine the actual role of Bacillus species in causing infections in humans [6]. Another editorial, by Zaghloul, discussed the emergence of strains of Staphylococcus aureus that are resistant to methicillin (MRSA), which has become a global problem. The rapid and accurate detection of MRSA is essential for the clinical management and treatment of MRSA-infected patients. This article is a concise, insightful and clinically useful overview of the biological principles underlying the MRSA problem [7]. An editorial by Molina and Basualdo offers a global review of ZIKV from Argentina, where human-to-human transmission, but not mosquito-to-human transmission, of the virus has been observed. This publication considers ZIKV epidemiology from both regional and global points of view [8]. A case report by Henry et al. describes two hepatic cysts, one in a man with chronic hepatitis B, another in a woman with non-Hodgkin's lymphoma. One of the cysts underwent spontaneous regression, but the other underwent repeated aspirations without benefit [9]. The strictly anaerobic, spore-forming, Gram-positive bacterium Clostridium difficile is a leading cause of healthcare-associated infections. A study by Worthington T and Hilton AC demonstrates that when C. difficile cells are exposed to germination solutions containing the bile salt sodium taurocholate, the vegetative cells tend to undergo prolonged germination on the surfaces they contaminate, and the germinating C. difficile cells thus become susceptible to disinfectants applied to the contaminated surfaces. This research presents an original approach to the prevention of the C. difficile infections that tend to occur in the healthcare environment [10]. A research article by Tarhan et al. describes the COBAS Amplicor, an automated PCR system for the rapid detection of RNA and DNA target sequences. The article addresses the use of various decontamination-homogenization-concentration techniques to enable the timely identification of the Mycobacterium tuberculosis complex (MTBC) in clinical samples using the COBAS Amplicor. Such timely identification plays an important role in the control of the spread of tuberculosis and its effective treatment [11]. A research article by Walid et al. is a retrospective cohort study of 122 children (under 1 year to 18 years of age) with various pulmonary, other respiratory and optic infections. The children were treated within a 5-year interval (Jan 2009 to Jan 2015) at the King Abdulaziz University Hospital, a teaching hospital situated in Jeddah, Saudi Arabia. Reported is a laboratory and statistical analysis of the emergence of antibiotic resistance in the numerous types of microorganisms that were isolated. This study helps to contribute to t
2019-02-16T14:30:28.405Z
2016-06-27T00:00:00.000
{ "year": 2016, "sha1": "a31ac118b1b0397df321737178f2d70e4058d458", "oa_license": "CCBY", "oa_url": "https://www.omicsonline.org/open-access/editor-note-medical-microbiology--diagnosis-2161-0703-1000e133.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "62f2fe4dd4fdc3759edc294e2e07a6b97b45ad6f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
38417004
pes2o/s2orc
v3-fos-license
Liver Transplantation at KFSHRC: Achievement and Challenges The liver transplantation program at KFSHRC has been active since 2001. More than 450 liver transplants have been performed so far. The program evolved from adult cadaveric transplantation to living donor and, recently, to pediatric and split techniques. The 1-year patient survival for both pediatric and adult recipients exceeds 90%, and the 5-year patient survival is more than 80%. Associated with this success are challenges that include: organ shortage, the quality of organs harvested, inability to meet the growing national need, the increased resources demanded by the program, and the lack of a collaborative national strategy for organ donation and transplantation. Though accurate epidemiological studies on liver disease in Saudi Arabia (SA) are generally lacking, it is estimated that between 700 and 1200 patients need liver transplantation annually. The need for transplantation has forced patients to seek transplants abroad, occasionally with inferior outcomes. 1 Liver transplantation in SA was started in the early nineties to meet the need of patients with end-stage liver disease. 2,3 Several programs were started at a low scale. Liver transplantation at King Faisal Specialist Hospital and Research Centre (KFSHRC) was launched in 2001 and has remained active to date. Currently, the program performs living and cadaveric transplantation in children and adults, and a split program was recently introduced. In this article, we describe the general performance of the program over more than a decade, focusing specifically on mortality timing as an indicator of the quality of care at different stages of the transplant process. We also try to highlight the challenges faced from donor and recipient perspectives, as well as what lies ahead in the context of the national need and available resources. Method and Result We retrospectively analyzed the patients who were transplanted from 2001 until the end of 2012. We specifically looked at the following 5 mortality indicators: mortality within 24 hours, mortality within the hospital stay, mortality with a normally functioning graft, mortality without a normally functioning graft, and late mortality (after discharge). The total number of transplants was 478 (adult: 387 and pediatric: 91). The patients were stratified on the basis of the year they were transplanted. The mortality in each patient cohort was calculated, and the patients were stratified according to the 5 mortality categories. Table 1 summarizes the mortality data related to timing, and Table 2 indicates the causes of death. Figure 1 summarizes the indications for transplantation. The subgroup analysis of adult living-donor recipients stratified by MELD (model for end-stage liver disease) score above and below 25 showed mortality of 31% versus 15.4%, respectively.
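A minimal sketch of how the five mortality indicators described above could be tabulated from a registry extract is shown below; the record fields, the precedence of the categories, and the example records are all hypothetical assumptions, not the program's actual schema or data.

```python
from collections import Counter

def mortality_category(rec):
    """Assign a death to one of the five categories used in this analysis (assumed order)."""
    if not rec["died"]:
        return None
    if rec["hours_post_tx"] <= 24:
        return "within 24 hours"
    if rec["in_hospital"]:
        return "within hospital stay"
    if rec["after_discharge"]:
        return "late (after discharge)"
    return ("with normal functioning graft" if rec["graft_functioning"]
            else "without normal functioning graft")

records = [  # hypothetical examples
    {"died": True,  "hours_post_tx": 10,   "in_hospital": True,  "after_discharge": False, "graft_functioning": False},
    {"died": True,  "hours_post_tx": 700,  "in_hospital": False, "after_discharge": True,  "graft_functioning": True},
    {"died": False, "hours_post_tx": None, "in_hospital": False, "after_discharge": False, "graft_functioning": True},
]
print(Counter(filter(None, map(mortality_category, records))))
```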
The most common indication in adults was hepatitis C cirrhosis (38%). Hepatocellular carcinoma was the main indication in almost 20% of the cases. The Milan criteria were generally applied 4 and were extended to the University of California San Francisco (UCSF) criteria for living-donor liver transplantation. 5 The number of referrals to KFSHRC for liver transplantation exceeds 650 annually, and between 55% and 65% of cases are accepted for evaluation. The average number of patients on the cadaveric waiting list is between 60 and 80, with a waiting-list mortality of 20% to 30%. Initially only cadaveric transplantation was performed; living-donor liver transplantation was introduced early on and made up one third of the total number of transplants. In the last 2 years, living-donor liver transplants have made up more than two thirds of the total. Another salient feature of the program was the introduction of split-liver transplantation, which was performed with 3 donors. Re-transplantation was performed in 14 patients (2.9%), with a 65% survival rate. The Kaplan-Meier graft and patient survival for the whole program up to 2011 is shown in Figure 2. Discussion Starting and sustaining a successful liver transplantation program is a major undertaking for any institution, no matter how advanced its setup. At KFSHRC, the quality of the program has been comparable to the international standard since its inception in 2001. Scaling up the program was difficult early on; however, after many administrative and logistical obstacles were eased, the number of transplants doubled from 2011 to 2012 and continued to grow while maintaining the same quality. The demand on the program has increased with the increasing number of referrals. Though the need is partially met by living-donor liver transplantation, the situation with cadaveric transplantation is not as good. Patients have been accumulating on the waiting list, with more than one third either dying or being delisted as a result of severe organ shortage and late referrals. Our data also indicated an inferior result of living donation in high-MELD recipients who were referred late. Pediatric patients without a living donor have been a major problem for the program. Though split grafting is becoming a standard of practice, to the extent of calling into question the need for pediatric living-donor liver transplantation, 6 the case in SA is not the same. Though we managed to perform a successful split, the scarcity of organs combined with their poor quality made split grafting a rare occurrence. The quality of organs in SA remains problematic, secondary to logistical issues. As a matter of fact, when the program was managed by an overseas team in 1994, the rate of primary nonfunction was as high as 25%, with a much higher rate of graft dysfunction. 7 Several authors have uncovered donor risk factors that may influence the outcome of liver transplantation; in general these are donor demographics, donor disease, donor cause and mechanism of death, and allocation factors. 8 A composite risk score was also proposed. 9,10 It remains to be seen whether these factors are applicable to the donor population in Saudi Arabia, where issues in the medical management of donors seem to be more important. It is estimated that more than 250 organs could be procured annually, whereas currently the number of cadaveric liver transplants does not exceed 100 in the country. 11
A joint effort among the Saudi Center for Organ Transplantation, Ministry of Health hospitals, and the transplant centers needs to proceed at a much more accelerated pace to streamline the organ donation process at the national level. 12 It is unlikely that living-donor liver transplantation will meet the need of patients with end-stage organ failure in SA; nor is it possible for one institution to cope with the burden of a large liver transplantation program. It is suggested that an aggressive and proactive approach toward organ donation be pursued to support the 4 liver transplantation programs in SA, and that adequate resources be made available to them to meet the need for liver transplantation among patients with end-stage liver failure. Conclusion The experience at KFSHRC demonstrates that major medical institutions in SA can sustain a successful liver transplantation program of quality comparable to programs in Europe and North America. A major reform of organ donation logistics needs to be undertaken as soon as possible to make use of the large potential of cadaveric donation. Major medical institutions need to understand the needs of their transplantation programs in terms of resources and administrative support, and a mechanism to reward the governmental medical institutions that have chosen to undertake liver transplantation needs to be established by decision makers at a higher level.
2018-04-03T04:17:33.369Z
2014-03-01T00:00:00.000
{ "year": 2014, "sha1": "64ba8e31e5bf169cb071f9e22e3dfbf85490a4b0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParseMerged", "pdf_hash": "64ba8e31e5bf169cb071f9e22e3dfbf85490a4b0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267353641
pes2o/s2orc
v3-fos-license
THE INFLUENCE OF MOTIVATION AND WORK ABILITY ON EMPLOYEE PERFORMANCE (STUDY CASE AT PT. JASAMARGA TOLLROAD OPERATOR) This research aims to determine the influence of motivation and work ability on employee performance at PT Jasa Marga Toll Road Operator. The population in this research was the 150 employees of PT Jasa Marga Toll Road Operator, and the sample was drawn by cluster random sampling, so the units selected as samples were not individuals but organized groups of individuals; the sample comprised all 120 operational employees. The results of the multiple linear regression analysis show that motivation and work ability have a positive and significant effect on employee performance. For the motivation variable, it is hoped that the company will provide promotion opportunities to employees by simplifying the conditions and stages of the consideration and selection process. For the work ability variable, it is hoped that the company can serve as a forum that helps employees improve their abilities; in addition, companies need to give employees opportunities to provide input and responses when resolving problems and challenges. INTRODUCTION The Industrial Revolution 5.0 will create sophisticated new technologies that can support work within companies; these technologies will help human resources deal with various types of work and carry out tasks more quickly, and with new breakthroughs and advances in technology there will be many new things to learn (Hargadon, 2003). However, this will give rise to various challenges for Human Resources (HR). The main challenge is that human resources may be replaced by technology (Agarwal et al., 2022). This problem certainly poses a threat to Indonesia, because Indonesia still has a fairly low level of human resource quality (Debrah et al., 2000). Based on the "Human Development Indices and Indicators 2019" report, Indonesia ranks 115th out of 189 countries, compared with neighboring countries such as Singapore in 8th position, Malaysia in 57th, Brunei Darussalam in 40th, and Australia in 3rd. Based on these data, it can be concluded that Indonesia still has low human resource quality (Baharin et al., 2020). In the service industry, consumer satisfaction is greatly influenced by the quality of interactions between consumers and the employees who make service contact (Hartline & Ferrell, 1996). One example of a service field that depends heavily on HR performance is toll road services. The Kayuagung-Palembang toll road is a toll road in South Sumatra that connects Kayuagung to Palembang (Kapal Betung), with a total length of 111.69 km. Construction of this toll road began in August 2016 and consists of 3 sections. The concession owner of this toll road is PT. Waskita Sriwijaya Toll (Wraharjo et al., 2022). PT. Waskita Sriwijaya Toll uses third parties for two kinds of services: PT. Waskita Karya, which functions as the contractor that builds the toll road, and PT. Jasa Marga Toll Road Operator (JMTO), which operates the toll road. The three companies synergize with each other to create the best toll services for the public (Thorson & Moore, 2013).
PT Jasa Marga Toll Road Operator (JMTO) is a service company engaged in the field of toll road operations. It was founded on 21 August 2015, initially under the name PT Jasa Service Operations (JLO). JMTO is a subsidiary of PT Jasa Marga (Persero) Tbk, with 99.9 percent of its shares owned by PT Jasa Marga (Persero) Tbk and 0.1 percent owned by the Jasa Marga Employee Cooperative. PT Jasa Marga Tollroad Operator (JMTO) consists of several branches spread throughout Indonesia, one of which is the Palembang branch. The data in Table 1.1 show that in 2023 PT Jasa Marga Tollroad Operator (JMTO) had 150 employees: 77 in the traffic department, 25 in the transactions department, and 48 in the maintenance department.

Performance at PT Jasa Marga Toll Road Operator is assessed across several service areas, divided into transaction, traffic, and maintenance services, each with its own assessment aspects. Comparing the years 2020-2022, realized performance in the various fields tended to decrease from year to year, with the lowest achievements in all fields occurring in 2022. This shows that PT Jasa Marga Toll Road Operator has not met the work targets set by the company.

A recapitulation of the response time data of PT Jasa Marga Toll Road Operator employees in handling obstacles in the field shows that average employee response times from 2020-2022 tended to fluctuate, and that in 2021 and 2022 the response times exceeded the maximum limit in the Standard Operating Procedures (SOP) set by the company. This phenomenon indicates that employees increasingly need more time to follow up on their work.

According to one of the employee performance indicators identified by Robbins (2018), namely punctuality, employees should be able to complete their tasks within the agreed time or even faster, and maximize the time they have to carry out other tasks. In fact, PT Jasa Marga Tollroad Operator employees have not been able to work to the standards that have been set, which indicates that their performance is still poor because they do not optimize their working time.

Another phenomenon showing that the performance of PT Jasa Marga Tollroad Operator employees is still poor is that in 2019 PT Jasa Marga Tollroad Operator also operated the Terbanggi Besar-Kayuagung toll road owned by PT Hutama Karya, in addition to the Kayuagung-Palembang toll road belonging to PT Waskita Sriwijaya Tol. However, the contract between PT Hutama Karya and PT Jasa Marga Tollroad Operator lasted only one year and was not extended, because PT Hutama Karya considered the performance of PT Jasa Marga Tollroad Operator employees still unsatisfactory and looked for another vendor to take over its duties.
Several phenomena emerged as causes of the decline in employee performance at PT Jasa Marga Tollroad Operator, one of which is a lack of employee motivation. The problem relates to the promotions granted by PT Jasa Marga Toll Road Operator: during 2020-2022 only one employee received a promotion, and only in the transactions department, while in the traffic and maintenance departments no one was promoted. This may cause the performance of PT Jasamarga Tollroad Operator employees to be less than optimal, because employees who are not rewarded with promotion lack the motivation to work.

Apart from these indications of low employee performance, the researchers chose PT Jasa Marga Tollroad Operator as the research object because it meets the qualifications the research requires: with 120 operational employees, it satisfies the standards for use as a research object. The problems observed at PT Jasa Marga Tollroad Operator attracted attention for further research, and expert views and previous research show that employee motivation, work ability, and performance are interconnected.

RESEARCH METHODS

The population in this study consisted of the 150 employees of PT Jasa Marga Toll Road Operator, who hold different positions, levels, and demographic conditions. The sample was drawn by cluster random sampling, a sampling method used when the population consists not of individuals but of groups of individuals, or clusters. The units selected as samples were therefore not individuals but organized groups of individuals; the sample comprised all 120 operational employees of PT Jasa Marga Tollroad Operator.

The appreciation dimension shows average percentages of 31.48% for "Agree" and 41.66% for "Strongly Agree". This places appreciation of work carried out, in accordance with the work results produced, in the "Pretty Good" category: employees deliver good work results in line with company expectations. However, there are indications that some employees still feel there is a lack of opportunity to obtain promotions or positions during their employment. Employees feel that obtaining a promotion requires a process with many conditions and stages, which still makes this opportunity difficult to reach.

The self-actualization dimension shows average percentages of 28.98% for "Agree" and 33.16% for "Strongly Agree", placing the self-actualization of employees at PT Jasa Marga Toll Road Operator in the "Pretty Good" category. Employees feel that the company provides opportunities to develop the abilities that support their work. However, since some respondents stated "Disagree", there is an indication that some employees feel they are not given the opportunity to express criticism, meaning that employees are not free to express opinions in the form of negative responses.

The Influence of Work Ability on Employee Performance at PT Jasa Marga Toll Road Operator
The analysis results obtained in this research show that work ability has a positive and significant effect on employee performance, so the second hypothesis is accepted. The frequencies of respondents' questionnaire answers support this conclusion. This research is in line with the results of (Al-Omar et al., 2019); (Jørgensen et al., 2011); (Wasiman et al., 2023); (Raudeliūnienė & Matar, 2023); and (Rezeki, 2023), which also found that work ability has a positive and significant effect on employee performance.

For the intellectual ability dimension at PT Jasa Marga Toll Road Operator, the average percentages are 37.66% for "Agree" and 35.18% for "Strongly Agree", showing that employees are able to recognize a logical sequence in a problem. However, since some respondents stated "Disagree", there is an indication that some employees felt they did not understand what they heard, which resulted in miscommunication. Information therefore needs to be provided more clearly, in writing or verbally, so that what is communicated receives an appropriate response.

The cognitive ability dimension shows average percentages of 42.16% for "Agree" and 38.34% for "Strongly Agree". This shows that employees at PT Jasa Marga Toll Road Operator are able to learn and fall into the "Good" category. However, since some respondents stated "Strongly Disagree" and "Disagree", there is an indication that although employees can solve existing problems, they still weigh many considerations when making decisions. Giving employees this opportunity will make them feel more respected for their ability to contribute to solving problems.
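The multiple linear regression reported above can be reproduced in outline. The sketch below is a minimal illustration with synthetic placeholder data; the variable names, score scales, and effect sizes are our assumptions, not the study's actual instrument or measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the questionnaire data: 120 operational employees,
# matching the study's sample size. All values are illustrative only.
rng = np.random.default_rng(42)
n = 120
motivation = rng.normal(3.8, 0.5, n)      # hypothetical mean Likert-scale scores
work_ability = rng.normal(4.0, 0.4, n)
performance = 0.6 * motivation + 0.5 * work_ability + rng.normal(0, 0.3, n)
df = pd.DataFrame({"motivation": motivation,
                   "work_ability": work_ability,
                   "performance": performance})

# Multiple linear regression: performance ~ motivation + work_ability.
X = sm.add_constant(df[["motivation", "work_ability"]])
model = sm.OLS(df["performance"], X).fit()
print(model.summary())  # coefficients, t-statistics, and p-values
```

The significance tests in the summary table correspond to the kind of positive-and-significant coefficients the study reports for both predictors.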
Effect of Atomoxetine on Behavior of Outbred Mice in the Enrichment Discrimination Test

Treatment of attention deficit hyperactivity disorder with medications is helpful in less than 60% of cases, suggesting the need to develop novel drugs. The most accepted animal model of the disease is the outbred spontaneously hypertensive rat strain. It was recently found, in a novel enrichment discrimination test, that this rat strain includes attentionally-low and -high phenotypes, and that atomoxetine, a clinically effective drug for the treatment of the disorder, is capable of ameliorating enrichment discrimination in the attentionally-low rats. The present study aimed to test the generality of these findings in outbred CD-1 mice assessed in the same experimental design. The frequency distribution of the enrichment discrimination ratio differed from the curve expected under the normality hypothesis and had a bimodal shape, suggesting the existence of attentionally-low and -high mouse phenotypes. Atomoxetine (3 mg/kg, orally, once daily for 4 days) selectively enhanced enrichment discrimination in mice of the attentionally-low phenotype only. The present results generalize and extend findings previously reported in spontaneously hypertensive rats and suggest that the present model could be useful in studies of the neurobiological mechanisms of attention deficiency in rodents and for screening of novel drug candidates for treatment of attention deficit disorder.

Introduction

Attention deficit disorder with or without hyperactivity (ADD/ADHD) is described in approximately 8% - 10% of children, with greater prevalence in boys than in girls [1]. It is usually first diagnosed in childhood and, if untreated, often lasts into adulthood [2]. Although the patients have problems in academic and job performance, their difficulty in selective attention is unrelated to an individual's overall intelligence and motor skills [1,3,4]. Causal mechanisms of ADD/ADHD are still not known. The disease has considerable heritable components revealed by family and twin studies [4,5], albeit environmental factors may also contribute to the disease [6]. The symptoms of ADD/ADHD can be alleviated by stimulant medications or some antidepressants, particularly by the novel non-stimulant drug atomoxetine [1,7,8]. Nonetheless, the medications are helpful in less than 60% of cases, suggesting the need to develop novel drugs for the treatment of ADHD [8,9].
Animal models of ADD/ADHD employ rodents of different genetic backgrounds [10][11][12][13]. Although the most accepted model is the outbred spontaneously hypertensive rat strain, capable of demonstrating both inattentiveness and hyperactivity, there is criticism related to the dissimilarity of results obtained across different tests [14] and to the fact that, in the instrumental paradigms employed, variability in general activity also contributes to the ADD/ADHD-like behavioral pattern [15], suggesting the need for independent evaluation of attention, cognitive performance, and general activity. It has been reported that the spontaneously hypertensive rat strain includes subpopulations of so-called impulsive and non-impulsive individuals [16,17]. Recently, the rat strain was also found to be nonhomogeneous with regard to attention in the enrichment discrimination paradigm, which does not involve rule learning and provides separate measures of attention towards enriching objects (ED-ratio), general locomotor activity, and spatial orientation [18]. The latter is considered as belonging to the cognitive domain that parallels Cattell's general fluid intelligence [19,20] and has been used for the assessment of cognition-enhancing drugs [21][22][23]. The attentionally-low and -high subpopulations did not differ from each other in measures of locomotor activity and blood pressure [18]. Also, the attentionally-high phenotype, as compared with the attentionally-low one, did not show superiority in the ability for spatial orientation. The anti-ADD/ADHD drug atomoxetine was capable of improving attention to the environmental cues in the attentionally-low phenotype. Although the attention-enhancing effect of atomoxetine coincided with a decrease in locomotor activity, it was not accompanied by alteration of spatial orientation.

The purpose of this study is to test the generality of the existence of attentionally-low and -high phenotypes among outbred mice, because the behavior of the most accepted models of ADD/ADHD in this species (DAT knockout and Coloboma mutant mice) is primarily characterized by hyperactivity rather than inattentiveness [13]. The specific aims of the present study are to evaluate: 1) whether the frequency distribution of the ED-ratio in a non-selected population of outbred mice diverges from the Gaussian distribution; 2) in case of non-homogeneity of the population, whether the ED-low and -high mouse phenotypes differ from each other in general locomotor activity and in the cognitive ability for spatial orientation, and whether atomoxetine is capable of improving the enrichment discrimination in mice of the ED-low phenotype.

Animals

Eighty male mice of the CD-1 strain (body weight 20-25 g) were purchased from the Pushchino animal breeding farm (Moscow region, Russian Federation). The strain was obtained from Charles River Laboratories, USA, in 2001. The animals were kept in standard vivarium conditions with free access to pellets of standard dry chow and sterile drinking water at a 12:12 hour light-dark cycle. The care and use of animals and the procedures reported in this study were in accordance with Directive 2010/63/EU of the European Parliament and of the Council on the protection of animals used for scientific purposes.
Drugs

Atomoxetine (Strattera, Eli Lilly, USA) was dissolved in sterile water containing 0.5% Tween-80 (P1754, Sigma-Aldrich, USA). The drug was administered orally using a stainless steel feeding needle at a dose of 3 mg/kg once daily. The vehicle for control animals contained the 0.5% Tween-80 in sterile water. The volume of administration was 2.5 ml/kg.

Apparatus

The cross-maze was purchased from OpenScience Ltd., Russian Federation (catalog # TS0605-2). The apparatus was made of black plastic and consisted of 4 closed arms (12 × 12 × 12 cm) connected to the same central compartment via rectangular doorways (7 × 7 cm). Two cylindrical glass bottles (4.5 cm in diameter, 4 cm high) served as enriching objects. The bottles were placed in opposite arms. Each of them was mounted vertically near the wall that was distant from the doorway. The maze was covered by a transparent plastic lid supplied with small ventilation holes and partition numbers. The arms were numbered in clockwise direction 1, 2, 3 and 4; the central compartment was assigned the number 5.

General Procedure

On day 1, the behavior of all 80 animals was evaluated in the first ED-test. The frequency distribution of the ED-ratio (see Sections 2.5 and 2.6) was compared with the normal distribution. Because there was a difference from the normal curve, showing the existence of ED-low and -high phenotypes, mice of both phenotypes were randomly divided into subgroups assigned to administration of either vehicle or atomoxetine (15 animals per group). The mice from the vehicle and atomoxetine groups received the corresponding treatment on days 4-7. On day 7, the animals were subjected to the same second ED-test, conducted an hour after the last vehicle or atomoxetine administration.

Enrichment Discrimination Test

The mouse was placed into the central compartment and allowed to explore the maze until 12 visits into arms had occurred, with a cutoff time of 10 minutes. Each visit was scored after entry into a compartment with all four paws inside. The sequence and timing of arms visited were recorded directly into a personal computer by the use of Behavset 3.0 software. The floor and the objects in the arms were cleaned thoroughly with a paper towel damped in 70% ethanol and were air-dried after each trial [18]. The position of the objects in a pair of opposite arms (#1 and #3, or #2 and #4) was alternated in a quasi-random order.

Subsequent analysis was performed with the help of Endisc software, detecting the following measures:

1) Total time spent in empty or enriched arms. Using this measure, the ED-ratio was calculated according to the formula:

ED-ratio = 100 × T_enriched / T_empty

where T_enriched is the total time spent in arms containing objects and T_empty is the total time spent in empty arms. In case of no difference between time spent in enriched and empty arms, the ratio is equal to 100. Animals exploring the objects typically stay longer in the enriched part of the maze than in the empty arms and have ED-ratio scores higher than 100. Attendance at the objects area and the time spent in the area by an animal exploring a novel environment are generally considered as measures of attention directed to the objects [20,24,25].

2) Total time in the maze until an animal completes 12 visits to arms, i.e., when it returns to the central partition from the arm entered on the 12th visit. The variable is considered a measure of locomotor activity because it is highly negatively correlated with ambulation in the open-field test [26].
3) Size of the first patrolling episode, scored as the number of entries by an animal into arms until each of the four arms has been visited at least once. For instance, if the sequence of arms entered is 124141334132, then the size of the first patrolling episode is 7, because the episode is completed with entry into arm #3 on the 7th visit. The more visits a patrolling episode takes, the less efficient the maze exploration. The shortest patrolling episode includes 4 visits. In that case, it is analogous to cognitive behavioral alternation in the Y-maze exploratory test (i.e., visiting all 3 arms in a row without repetitions), which has been considered a measure of short-term-memory-dependent spatial orientation [27,28].

4) The total number of patrolling episodes made by an animal during the test. In the example above, the measure equals two, because the second patrolling episode is completed on the 12th entry, into arm #2. The more patrolling episodes are made during the test, the more efficient the exploration. It was possible for an animal to make a maximum of 3 patrolling episodes during the test. Both measures of patrolling behavior are considered to represent the ability for spatial orientation [21,26,29] (a short code sketch reproducing measures 1, 3, and 4 appears at the end of this subsection). The cognitive behavior is sensitive to cognition-enhancing drugs [21][22][23], ageing [29], and L-glutamate applied in neurotoxic concentration to the frontal cortex [30].

Statistical Analysis

The Chi-Square test was employed for comparison of the Gaussian distribution with the frequency distribution of the ED-ratio. T-tests for independent and dependent samples were used for comparison of measures from the ED-low and ED-high phenotypes. A two-way ANOVA with phenotype (ED-low or -high) and drug (vehicle or atomoxetine) as independent variables was performed for estimation of atomoxetine effects on the behavior of mice in the second ED-test. Also, differences between pairs of means were evaluated by the use of ANOVA's univariate test of significance for planned comparison. The analysis was made using the Statistica 6.0 package.

Results

The ED-ratio of time spent in enriched and empty arms in the first ED-test had a frequency distribution with an apparent bimodal shape (Figure 1). The distribution differed significantly from the curve expected under the normality hypothesis (Chi-Square = 12.93, df = 4, p < 0.012), revealing the existence of two phenotypes that diverge in attention to enriched partitions. Because the local minimum between the modes was near the score of 100, the mice with ED-ratio below 100 were accepted as ED-low (mean ± SEM of the ED-ratio = 76.3 ± 2.2), while the rest of the animals were considered ED-high (138.4 ± 3.4, respectively). The ED-low phenotype was present in 45% of individuals of the unselected population.

The ED-low mice spent less time in the enriched than in the empty arms of the maze (t(35) = 6.73, p < 0.001; Table 1). On the contrary, the ED-high mice had a preference for enriched partitions (t(43) = 12.01, p < 0.001). The phenotypes did not differ from each other in measures of patrolling behavior and in time spent in the maze until 12 visits into arms had occurred. The ED-ratio variable is employed for division of the unselected population into ED-low and -high subpopulations; §§ and §§§ denote significant difference between ED-low and -high subpopulations revealed by T-test (p < 0.01 and p < 0.001, respectively).
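The behavioral measures defined above are simple enough to compute directly from a recorded visit sequence. The following minimal Python sketch reproduces the ED-ratio and both patrolling measures; the arm-to-enrichment mapping and the timing values are illustrative placeholders, not data from the study.

```python
def ed_ratio(time_in_arms, enriched_arms):
    """ED-ratio = 100 * T_enriched / T_empty.

    time_in_arms: dict mapping arm number -> total seconds spent there.
    enriched_arms: set of arms containing the enriching objects.
    """
    t_enriched = sum(t for arm, t in time_in_arms.items() if arm in enriched_arms)
    t_empty = sum(t for arm, t in time_in_arms.items() if arm not in enriched_arms)
    return 100.0 * t_enriched / t_empty


def patrolling_measures(visits, arms=frozenset({1, 2, 3, 4})):
    """Return (size of first patrolling episode, number of completed episodes).

    A patrolling episode is completed once every arm has been entered
    at least once since the start of the episode.
    """
    first_size = None
    episodes = 0
    seen = set()
    start = 0
    for i, arm in enumerate(visits, start=1):
        seen.add(arm)
        if seen == arms:
            episodes += 1
            if first_size is None:
                first_size = i - start
            start = i
            seen = set()
    return first_size, episodes


# Worked example from the text: sequence 124141334132 gives a first
# episode of size 7 (completed on entry into arm 3) and 2 episodes total.
seq = [int(c) for c in "124141334132"]
print(patrolling_measures(seq))  # (7, 2)

# Illustrative timing data (seconds per arm); arms 1 and 3 enriched.
print(ed_ratio({1: 80, 2: 50, 3: 70, 4: 45}, enriched_arms={1, 3}))
```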
In the second ED-test, mice of the ED-low and -high phenotypes that had been treated with vehicle retained their main characteristics (Table 2). The phenotypes diverged from each other both in time spent in enriched (t(28) = 2.59, p = 0.015) and empty arms (t(28) = 2.54, p = 0.05). The ED-low mice spent less time in enriched arms as compared with empty ones (t(14) = 2.25, p = 0.041), while the ED-high mice displayed a preference for enriched partitions (t(14) = 2.22, p = 0.043). Correspondingly, the ED-ratio for mice of the ED-low phenotype was again lower than in ED-high mice (t(28) = 3.43, p = 0.002). Two-way ANOVA of time spent in enriched arms by the ED-low and -high mice treated with either atomoxetine or vehicle revealed the only significant effect to be that of drug (F(1,56) = 7.16, p = 0.01; there was a 47% and 22% increase in the measure, correspondingly).

On time spent in empty arms, the ANOVA yielded a significant effect of drug (F(1,56) = 4.23, p = 0.044) and an interaction of phenotype with drug (F(1,56) = 5.74, p = 0.02). Paired comparison revealed that atomoxetine produced a 46% reduction in the measure in mice of the ED-low phenotype (F(1,56) = 9.91, p = 0.003). The effect of atomoxetine on time spent in empty arms by the ED-high mice did not reach statistical significance. After atomoxetine, time spent in enriched partitions was greater than that in empty ones in mice of both the ED-low (t(28) = 3.99, p = 0.001) and -high (t(28) = 2.86, p = 0.013) phenotypes.

Discussion

The present study shows that the frequency distribution of the ED-ratio of time spent in enriched and empty partitions by mice from a non-selected population differs from the curve expected under the normality hypothesis and has a bimodal shape. The result reveals the existence, among CD-1 mice, of two subpopulations that diverge in attention to enriched partitions in the maze. Mice of the ED-high phenotype prefer enriched partitions, while those of the ED-low one do not have that property. The ED-low mice spent even less time in enriched than in empty arms, probably because in the former case the floor was partially occupied by enriching objects and there was less space for exploratory ambulation. The divergence between ED-low and -high mice seems to be specific to attention to environmental cues, because the phenotypes differ neither in spatial orientation (patrolling behavior) nor in locomotor activity. The outcomes of the second ED-test demonstrate the relative stability of the behavioral patterns displayed by the ED-low and -high phenotypes. Atomoxetine selectively enhances the attentional behavior in individuals of the ED-low phenotype only: the drug increases the ED-ratio but has no significant effect on either locomotor activity or spatial orientation. The results are in general agreement with those reported in hyperactive DAT knockout and Coloboma mice: there was no significant difference from the corresponding genetic controls in measures of spatial orientation in the Y-maze [31,32], and atomoxetine did not reduce ambulatory activity of Coloboma mice in the square arena [33].
The findings of the present study in mice generalize and extend results previously reported in spontaneously hypertensive rats [18]. Both species include individuals of attentionally-low and -high phenotypes. The rodent species diverge only in the local minimum between the two modes of the ED-ratio distributions (100 in mice vs. 200 in rats). As compared with spontaneously hypertensive rats, the mouse population consequently includes fewer ED-low animals (45% vs. 81% in the rats), and the ED-low mice have slightly lower ED-ratio scores (76.3 ± 2.2 vs. 104.8 ± 6.2, correspondingly). Both the absolute and the relative distance between the ED-low and -high phenotypes seem to be larger among the rats (ED-low ratio = 104.8 ± 6.2, ED-high ratio = 326.7 ± 12.6) than in CD-1 mice (ED-low ratio = 76.3 ± 2.2, ED-high ratio = 138.4 ± 3.4). In both rodent species, atomoxetine is capable of ameliorating the enrichment discrimination in the ED-low animals. At the same time, atomoxetine does not produce improvement of patrolling behavior, which is considered a marker of cognition-enhancing drug activity [23,29]. On this basis, the ED-enhancing effect of the drug cannot be attributed to a potential cognition-enhancing property. It might be interesting in the future to estimate the effects of different medications in the ED-test, including those that have been used for treatment of ADD/ADHD. Because the mouse genome is well characterized, already existing [34][35][36] and novel specific gene knockout and other transgenic strains could be used to evaluate the mechanisms underlying the attention deficiency revealed in the ED-test.

To the best of our knowledge, the enrichment discrimination in rats and mice is the first simple paradigm that could serve as a model of attention deficiency suitable for drug screening and recognition of novel anti-ADD/ADHD drug candidates, as well as for a variety of neuroscience experiments and translational studies of the molecular mechanisms of attention deficiency in rodents.

Figure 1. Frequency distribution of the ED-ratio in CD-1 mice evaluated during the first ED-test (represented by bars) has a bimodal shape and differs significantly from the theoretical normal curve (represented by the line) (Chi-Square = 12.93, df = 4, p = 0.012).

Table 1. Behavioral measures from the first enrichment discrimination test in the ED-low and -high subpopulations of CD-1 mice (mean ± S.E.M.).

Table 2. Behavioral measures (mean ± S.E.M.) from the second ED-test in ED-low and ED-high CD-1 mice after vehicle or atomoxetine administration (orally, once daily, for 4 days). Columns: Vehicle (ED-low, ED-high) and Atomoxetine, 3 mg/kg (ED-low, ED-high). Notes: ‡ effect of treatment type (vehicle or atomoxetine) revealed by two-way ANOVA, p < 0.05; ## and ### difference between atomoxetine and placebo revealed by ANOVA's univariate test for planned comparison, p < 0.01 and p < 0.001; §§ and §§§ difference between ED-low and -high phenotypes revealed by ANOVA's univariate test for planned comparison, p < 0.01 and p < 0.001; †, †† and ††† significant difference revealed by T-test between time spent in enriched and empty arms, p < 0.05, p < 0.01 and p < 0.001.
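For readers who want to replay the statistics, the sketch below shows the two tests that carry the paper's main claims: the chi-square comparison of a binned ED-ratio histogram against a fitted normal curve, and the phenotype × drug two-way ANOVA. All numbers are placeholder data generated to be bimodal, not the study's measurements.

```python
import numpy as np
from scipy import stats
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# --- Chi-square test of normality on binned ED-ratios (placeholder data).
ed = np.concatenate([rng.normal(76, 15, 36), rng.normal(138, 20, 44)])  # bimodal
counts, edges = np.histogram(ed, bins=7)
mu, sd = ed.mean(), ed.std(ddof=1)
cdf = stats.norm.cdf(edges, mu, sd)          # fitted normal, per-bin probabilities
expected = len(ed) * np.diff(cdf)
expected *= counts.sum() / expected.sum()    # match totals for the test
chi2, p = stats.chisquare(counts, expected, ddof=2)  # 2 fitted parameters
print(f"Chi-square = {chi2:.2f}, p = {p:.4f}")

# --- Two-way ANOVA: phenotype (ED-low/-high) x drug (vehicle/atomoxetine).
n = 15  # animals per group, as in the study design
df = pd.DataFrame({
    "phenotype": np.repeat(["low", "high"], 2 * n),
    "drug": np.tile(np.repeat(["vehicle", "atomoxetine"], n), 2),
    "t_enriched": rng.normal(120, 25, 4 * n),  # placeholder outcome
})
model = ols("t_enriched ~ C(phenotype) * C(drug)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```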
Reconstruction of multi-balloon copying curved surface based on linear distance field function optimisation

To address the problems of low precision in three-dimensional (3D) curved surface reconstruction and poor surface smoothness in the reconstruction result, a method based on linear distance field function optimisation is proposed for the reconstruction of the multi-balloon copying curved surface. First, curvature weighting is applied to the curved surface point cloud model constructed with the Poisson's equation (PE), and a boundary value limiting condition on the average curvature is used to smooth the curved surface. Second, owing to the dynamic deformation characteristic of the multi-balloon copying curved surface, the linear distance field function is used to globally register the non-deformable and deformable areas, and the distance field gradient is used to estimate the non-deformable area. Moreover, self-adaptive weight optimisation is conducted with a matching confidence level for the deformable regions, thereby improving the registration accuracy of the 3D point clouds. The simulation results show that the method proposed in this study improves the registration accuracy and surface reconstruction smoothness of the standard PE.

Introduction

The multi-balloon flexible copying robot is a new research topic in the field of flexible robots [1][2][3]. The copying quality of the robot mainly depends on how precisely the reconstructed copying curved surface fits the target model, and the results of general curved surface reconstruction [4][5][6][7] for dynamically deforming targets are far from satisfactory. Surface reconstruction of dynamic targets is mainly the result of dynamic matching of three-dimensional (3D) point clouds, and some research progress has been made both at home and abroad. Rostam et al. [8] proposed a 3D curved surface reconstruction method based on a stereo matching algorithm, which improved the edge matching of the 3D point cloud with self-adaptive weighting and reduced point cloud noise through joint weighting in the filter. Hitendra et al. [9] proposed a 3D curved surface reconstruction method based on surface stripes, which applied photometric stereo stripe analysis to the estimation of the roughness of outline curves and therefore made the reconstructed curved surface smoother. Carmelo et al. [10] researched the detection of boundary points of the 3D point cloud, applying a polynomial function as the curve fitting method to reduce the impact of noise on the curved surface reconstruction. Ojaswa and Nidhi [11] researched 3D curved surface reconstruction for smooth irregular objects and improved the reconstruction precision through a triangular grid interpolation optimisation strategy. Dou et al. [12] iterated over the 3D point cloud with a neural network, proposed a deep recurrent neural network algorithm, and constructed a 3D curved surface reconstruction model from a multi-viewpoint point cloud. Reyes et al. [13] proposed a fuzzy control strategy for dynamic deformable objects and conducted 3D curved surface reconstruction of dynamic targets with a constant-scale characteristic-transforming strategy.
Florian et al. [14] proposed an approach to 3D curved surface reconstruction from a sparse point cloud, which models the front surface with a statistical shape model and fits the curved surface with a Gaussian mixture model. Alma et al. [15] proposed an approach to 3D curved surface reconstruction based on a radial basis function (RBF) neural network and particle swarm optimisation, which reduces the operations on the 3D point cloud through pattern recognition and environment mapping. Jules et al. [16] proposed an approach to 3D curved surface reconstruction based on the Poisson equation (PE), which constructs the gradient vector field with an implicit zero-level function and closes the scattered point cloud into a closed model. Yukie et al. [17] proposed, through analysis of the surface grid, to conduct calculations by transforming the surface grids into an equivalent surface and to improve the robustness of the reconstruction through randomly selected projection directions. Jung et al. [18] studied a finite-element grid model, conducted gridding division of the curvature information with the K-means clustering algorithm, and performed the curved surface transformation with B-splines. Although the above methods can effectively register and reconstruct curved surfaces from 3D point clouds, problems remain, such as low registration accuracy and poor smoothness of the reconstructed surface.

The remainder of this paper is organised as follows: Section 2 constructs a surface reconstruction model based on the PE; Section 3 proposes a curvature-weighted surface smoothing method; Section 4 uses a linear distance field function to optimise the registration accuracy of the 3D point cloud; Section 5 conducts simulation tests on the improved algorithm.

Curved surface reconstruction based on PE

The PE [19] is a common method of 3D curved surface reconstruction, whose essence is to process the 3D point cloud by solving the equation, extract the equivalent surface of the 3D point cloud and, on this basis, splice the equivalent surface patches to complete the reconstruction of the 3D curved surface. The flowchart is shown in Fig. 1.

First, the octree topology model is constructed from the target point cloud [20], and the assignment function F_a of every node a of the octree topology model is given by Equation (1), in which a_m is the central point of the octree node a and a_d is the width of the octree node a. Then, linear interpolation is conducted over the eight neighbouring nodes, and the gradient vector field is represented by Equation (2):

V(q) = Σ_{n ∈ S} Σ_{a ∈ L(n)} λ_{a,n} F_a(q) (n · U)

where L(n) is the set of eight neighbouring nodes of the node n, {λ_{a,n}} are the linear interpolation weight factors, S is the target point cloud, and n · U is the normal vector of the vertex. The simplified equation is then established as Equation (3). Solving for the minimum of the function η in (3) yields the equivalent surface, and the equivalent surface patches are spliced to complete the 3D curved surface reconstruction.

As an example, take the Rabbit point cloud data set from Stanford University. Simulation experiments with the above model yield the point cloud registration accuracy shown in Fig. 2 and the curved surface reconstruction result shown in Fig. 3. As shown in Figs. 2 and 3, the precision of the curved surface reconstruction method based on the PE is low, and the surface smoothness of the reconstruction result is poor.
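Equations (1)-(3) amount, in essence, to solving a Poisson problem for an indicator-like function whose iso-contour is the reconstructed surface. The toy 2D sketch below is written under that assumption and does not follow the paper's octree implementation: it builds a smoothed gradient field from a known shape and recovers the shape by a DCT-based Poisson solve, which is a standard substitute discretisation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.fft import dctn, idctn

# Toy 2D analogue of PE reconstruction: build a smoothed gradient field V
# from an indicator image, then recover an indicator function chi by
# solving laplace(chi) = div(V) with a DCT solver (Neumann boundaries).
n = 64
yy, xx = np.mgrid[0:n, 0:n]
indicator = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2).astype(float)

# The smoothed gradient stands in for the splatted oriented samples.
V = np.gradient(gaussian_filter(indicator, 2.0))
div_V = np.gradient(V[0], axis=0) + np.gradient(V[1], axis=1)

# Solve laplace(chi) = div_V in the cosine-transform eigenbasis.
rhs_hat = dctn(div_V, norm="ortho")
k = np.arange(n)
denom = (2 * np.cos(np.pi * k / n) - 2)[:, None] \
      + (2 * np.cos(np.pi * k / n) - 2)[None, :]
denom[0, 0] = 1.0            # zero mode: fix the free constant
chi_hat = rhs_hat / denom
chi_hat[0, 0] = 0.0
chi = idctn(chi_hat, norm="ortho")

# The 'equivalent surface' is an iso-contour of chi; compare with the disk.
iso = chi > 0.5 * (chi.min() + chi.max())
print("overlap with true disk:", (iso == indicator.astype(bool)).mean())
```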
Curved surface smoothing based on curvature weighting

To make the reconstructed surface smoother, this paper optimises it with a curvature-weighted approach. The flowchart is shown in Fig. 4. Assume that the open-surface function of the curved surface point cloud model constructed based on the PE is μ(x, y); its continuity condition is given by Equation (4), where ρ is a weight factor. As shown in (4), the curved surface constructed by the points (x, y, μ(x, y)) meets the smoothness conditions, and its average curvature is f(x, y).

Assume that the surface of the point cloud model constructed based on the PE does not meet the above conditions. Then the initial value is set to the z-coordinate of each matrix mesh node, and μ(x, y) is iterated at every node until the above conditions are met. The average curvature iteration equation of the non-smooth curved surface is given by Equation (5), where s_x is the step in the direction of x and s_y is the step in the direction of y. Since no boundary is set in the above iteration process, the surface can easily depart from its original form. The original point cloud is therefore classified using the minimum absolute value of the average curvature at the extreme points as the boundary value, as in Equation (6). The region larger than this boundary value belongs to the deformation region and is excluded from the iteration process, thus preserving the original form. Simulation tests conducted with the above curved surface smoothing approach are shown in Figs. 5 and 6. As can be seen from Figs. 5 and 6, the curved surface smoothing based on curvature weighting performs well.

Precision optimisation based on linear distance field function

To address the low registration accuracy of the surface reconstruction method based on the PE, this paper optimises it with the linear distance field function, through which global registration is performed. Suppose the point cloud set is S. Then the shortest distance from a point a to the point cloud set is given by the following equation:

d(a, S) = min_{p ∈ S} ‖a − p‖ (7)

where p is the point of S closest to a. In order to distinguish whether a point lies within the boundary ∂U, linearisation is conducted by attaching positive and negative signs: if the point is within the boundary ∂U, the sign is defined as negative, and vice versa, as shown in the following equation:

sgn(a) = −1 if a ∈ U, sgn(a) = 1 otherwise (8)

so that the signed distance is sgn(a) · d(a, S). Global registration of the 3D point cloud model is conducted through the above linear distance field function and, in accordance with the characteristics of the multi-balloon copying curved surface, is divided into global registration of the non-deformable regions and global registration of the deformable regions.
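The signed distance field of Equations (7) and (8) translates directly into code. The sketch below is a minimal implementation using a k-d tree for the nearest-neighbour distance; the inside/outside test is supplied by the caller, since for a real balloon surface it would come from the reconstructed closed model.

```python
import numpy as np
from scipy.spatial import cKDTree

def signed_distance(queries, cloud, is_inside):
    """Signed distance field of Eq. (7)-(8).

    queries:   (m, 3) query points a
    cloud:     (n, 3) point cloud set S
    is_inside: callable mapping (m, 3) points -> boolean mask (a in U)
    """
    d, _ = cKDTree(cloud).query(queries)         # d(a, S) = min_p ||a - p||
    sgn = np.where(is_inside(queries), -1.0, 1.0)
    return sgn * d

# Toy example: S sampled from a unit sphere, so 'inside' means ||a|| < 1.
rng = np.random.default_rng(1)
s = rng.normal(size=(2000, 3))
s /= np.linalg.norm(s, axis=1, keepdims=True)
a = rng.uniform(-1.5, 1.5, size=(5, 3))
print(signed_distance(a, s, lambda q: np.linalg.norm(q, axis=1) < 1.0))
```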
Global registration of the non-deformable regions

Divide the entire point cloud set into uniform sub-blocks, with every sub-block O_i containing the same number of distance field sampling points. The error between the distance field at the sampling points and the initial distance field value is calculated by Equation (9), where ϕ_M is the distance field of the sampling point, ϕ_N is the initial distance field, p_j is a sampling point of the block, w_i is the transform function, and β is the deformation parameter. Performing the global registration for every sub-block O_i yields the parameter increment Δw_i of the non-deformable regions (Equation (10)). Linearising Equation (10) and calculating the distance field gradient ∇ϕ_N with the Sobel operator yields Equation (11), in which H_i is the Gauss-Newton approximation of the Hessian matrix, given by Equation (12). The transform function w_i is updated with the non-deformable region parameter increment Δw_i until convergence.

Global registration of the deformable regions

In accordance with the deformable regions of the multi-balloon copying curved surface, define all the sampling points p in the two non-deformable blocks O_i and O_j and perform the transform iterations β_i and β_j. The difference value after iteration is given by Equation (13). To keep the copied curved surface smooth after the iterations, the weighted sum of the local smoothing terms is found, as in Equation (14). Considering that, after the balloon is deformed, the point cloud matching degree of the deformable regions changes, the matching confidence level E_v is introduced to increase the matching precision, as in Equation (15). Here γ_i is the self-adaptive weight, whose value is found through the accompanying equation, in which e_i^loc is the matching error, e_max is the maximum matching error, and σ is an adjustment factor that prevents blocks with large errors from being excluded from the iteration.

Registration precision simulation testing

To test the effectiveness of the algorithm, take the Rabbit point cloud dataset as an example. The original registration data are shown in Fig. 7, the feature points extracted by the iterated partial evaluation (IPE) algorithm of this paper are shown in Fig. 8, and the point cloud registration results are shown in Fig. 9. Different numbers of characteristic points were selected, and the iterative closest point (ICP) algorithm, the standard PE, and the approach of this paper (IPE) were compared and analysed; the results are shown in Figs. 10-13 and Table 1. As can be seen from the results in Figs. 10-13 and Table 1, as the number of characteristic points increases, the matching precision of the ICP algorithm and the standard PE becomes increasingly lower, while the approach proposed in this paper maintains a certain matching precision.

Simulation of the balloon curved surface reconstruction

For the multi-balloon copying curved surface reconstruction, curved surface reconstruction is performed for the balloon in different air-filling states. The matching precision of the ICP algorithm, the standard PE, and the approach of this paper (IPE) at five different time points of the balloon state test is shown in Figs. 14-18. As can be seen from the simulation results shown in Figs. 14-18, the approach presented in this paper yields higher registration precision. Taking the balloon at the instant t5 as an example, the curved surface reconstruction results of the three approaches are shown in Figs. 19-21. As can be seen from the results in Figs. 19-21, the approach proposed in this paper gives better smoothness of the reconstructed curved surface.

Conclusions

Research on the copying curved surface reconstruction of the multi-balloon flexible robot helps to simulate the curved surface of the copied object more precisely, making the flexible robot applicable to wider areas. In this paper, a method of 3D surface reconstruction based on linear distance field function optimisation is proposed.
Based on the standard PE, the non-deformable and deformable regions of the 3D surface are optimised separately, which improves the registration accuracy and the surface reconstruction smoothness of the standard PE. Although the improved algorithm in this paper has these advantages, its running time is longer than that of the standard algorithm, so it will be optimised and improved in future work.
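The block-wise Gauss-Newton update of Section 4.1 can be sketched compactly. The code below is a simplified 2D stand-in for Equations (9)-(12): it estimates a translation w_i for one sub-block by driving the distance field values at the transformed sampling points to zero. The bilinear interpolation and Sobel gradient are standard substitutes for whatever discretisation the paper actually uses.

```python
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def gauss_newton_translation(phi, points, iters=20):
    """Estimate a 2D translation w so that phi(p + w) ~ 0 for all block points.

    phi:    2D signed distance field of the target surface (grid units)
    points: (n, 2) sampling points p_j of one sub-block O_i, as (row, col)
    """
    # Sobel gradients of the distance field, as in Eq. (11).
    gy, gx = sobel(phi, axis=0) / 8.0, sobel(phi, axis=1) / 8.0
    w = np.zeros(2)
    for _ in range(iters):
        q = (points + w).T                       # transformed points, shape (2, n)
        r = map_coordinates(phi, q, order=1)     # residuals phi_N(w(p_j))
        J = np.stack([map_coordinates(gy, q, order=1),
                      map_coordinates(gx, q, order=1)], axis=1)  # (n, 2)
        H = J.T @ J                              # Gauss-Newton Hessian, Eq. (12)
        dw = np.linalg.solve(H + 1e-9 * np.eye(2), -J.T @ r)
        w += dw
        if np.linalg.norm(dw) < 1e-6:
            break
    return w

# Toy check: distance field of a circle; block points lie on the circle but
# are shifted by a known offset, so the solver should recover its negative.
n = 128
yy, xx = np.mgrid[0:n, 0:n]
phi = np.hypot(yy - 64, xx - 64) - 30.0          # signed distance to a circle
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.stack([64 + 30 * np.sin(t) + 3.0,       # shifted by (+3, -2)
                64 + 30 * np.cos(t) - 2.0], axis=1)
print(gauss_newton_translation(phi, pts))        # approx (-3, 2)
```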
On Linking of Task Analysis in the HRA Procedure: The Case of HRA in Offshore Drilling Activities

Human reliability analysis (HRA) has become an increasingly important element in many industries for the purpose of risk management and major accident prevention; for example, recently, to perform and maintain probabilistic risk assessments of offshore drilling activities, where human reliability plays a vital role. HRA experience studies, however, continue to warn about potential serious quality assurance issues associated with HRA methods, such as too much variability in comparable analysis results between analysts. A literature review highlights that this lack of HRA consistency can be traced in part to the HRA procedure and a lack of explicit application of task analysis relevant to a wide set of activity task requirements. As such, the need for early identification of, and consistent focus on, important human performance factors among analysts may suffer, and consequently, so does the ability to achieve continuous enhancements of the safety level related to offshore drilling activities. In this article, we propose a method that clarifies a drilling HRA procedure. More precisely, this article presents a novel method for the explicit integration of a generic task analysis framework into the probabilistic basis of a drilling HRA method. The method is developed and demonstrated under specific considerations of multidisciplinary task and well safety analysis, using well accident data, an HRA causal model, and principles of barrier management in offshore regulations to secure an acceptable risk level in the activities from its application.

Introduction

Human reliability analysis (HRA) is becoming increasingly important as a tool for risk control in activities that have catastrophic potential, such as nuclear power generation and offshore drilling. The main purpose of HRA of activities is to identify and evaluate the key human behaviour-oriented risk factors that concern major accident prevention for any operator-intensive system under different operational modes. An offshore operating company may typically employ HRA during the planning and follow-up of drilling activities to control the blowout risk associated with interactions among service providers [1]. In this case, HRA could be considered critical in assisting an operator to maintain two barriers during drilling operations [2], and thereby to provide an acceptable level of safety as stipulated by society [3]. As an example, there are requirements for the driller to manually activate the blowout preventer (BOP), a main well safety barrier, during operations. The need to activate the BOP may occur relatively often, according to data [4]. Therefore, HRA helps to identify and evaluate the influences of human and organisational factors in drilling, which nowadays may be considered a prerequisite to risk management. This article comprises the last part of a trilogy [5] that proposes a new method for probabilistic risk assessment of offshore drilling activities [1]. This final part proposes that further improvements be made to complete the procedure method; namely, that the procedure explicitly describe the link in an HRA causal model to the performance of generic task analysis, since every well design is unique from Mother Nature's side. As such, the objective of this procedure enhancement is to include an explicit link between the collective term of task analysis and the HRA method, to reduce the tendency for analyst-to-analyst variability, which remains a potential, prevailing quality assurance issue in HRA [6][7][8][9][10].
HRA critique points to several factors that may help compromise HRA quality, factors that are also associated with task analysis and procedure. For example, NUREG-1792 [6] describes many HRA methods as merely quantification methods that need to be tailored to specific activity requirements. Even this may not be straightforward, since task requirements vary between different industries and workplace conditions [9]. Notably, different requirements can also be found within an industry, such as the risk assessment performed at the installation level versus the well system level [5].

The literature also includes discussions related to: (i) adopting knowledge about human behaviour that may be outdated or only applicable to simple tasks; (ii) the 'black box' nature of many causal models that makes validation difficult; (iii) the use of terminology not particularly suited for proactive human failure analysis [9,[11][12][13]]. Issues related to terminology may presumably also have links to the many knowledge domains found commingled in HRA methods, notably different human factor concepts in methods such as: (i) organisational and normal (sociotechnical) accidents [14,15]; (ii) heuristics and biases [16]; (iii) perceptual cycle and sensemaking [10,17]; and (iv) situation awareness [18,19].

Table 1 summarizes the literature relevant to categorical task analysis and HRA in the oil and gas industry. As shown, the literature may be classified with different causality focuses that, in turn, are organised in influence structures of one to four levels in total. The most popular framework today in task analysis, with adaptations also for oil and gas, is the human factors analysis and classification system (HFACS) [20,21], which is based on the energy defence model ([15], Figure 1 and Figure 6). HFACS has been adapted and demonstrated for several applications in the literature, among others in oil refinery accident investigations [22]. HFACS represents a further development of Reason's energy defence hierarchical causal classification scheme, which is also adopted in the drilling HRA method [1]. Whereas HFACS also considers preconditions for unsafe acts as an extra level within the hierarchy, the drilling HRA includes a separate checklist developed with elements from social and cognitive psychology.

Interestingly, a keyword search in the Table 1 literature produced limited explicit discussion relevant to important offshore barrier management and failure analysis concepts such as performance influences and performance requirements. For example, in the Norwegian oil and gas industry, the safety authorities emphasise the explicit need for definition of the human, organizational, or technical barrier elements put in place to realise a main safety function in oil and gas activities [2]. The guideline suggests definitions in risk analysis based on a hierarchical breakdown as follows (a data-structure sketch of this breakdown follows the list): (i) Main barrier function and subfunctions, which describe what is to be achieved by the barrier. (ii) Barrier elements, which describe equipment, personnel, and operations that are necessary to achieve the functions. (iii) Performance requirements, which describe measurable requirements about element properties. (iv) Performance-influencing factors (PIF), which describe identified conditions that may impair the ability of elements to perform as intended.
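A minimal sketch of the barrier breakdown (i)-(iv) as a data structure follows; the class and field names, and the BOP example, are our illustrative assumptions rather than content of the PSA guideline itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerformanceInfluencingFactor:
    description: str                     # condition that may impair an element

@dataclass
class PerformanceRequirement:
    description: str                     # measurable property requirement
    influences: List[PerformanceInfluencingFactor] = field(default_factory=list)

@dataclass
class BarrierElement:
    kind: str                            # 'equipment', 'personnel', or 'operation'
    name: str
    requirements: List[PerformanceRequirement] = field(default_factory=list)

@dataclass
class BarrierFunction:
    goal: str                            # what the barrier is to achieve
    elements: List[BarrierElement] = field(default_factory=list)

# Example: manual BOP activation as a personnel barrier element of well control.
well_control = BarrierFunction(
    goal="Prevent uncontrolled flow from the well",
    elements=[BarrierElement(
        kind="personnel",
        name="Driller activates the blowout preventer (BOP)",
        requirements=[PerformanceRequirement(
            description="BOP activated within the time stated in the procedure",
            influences=[PerformanceInfluencingFactor("High workload on drill floor")],
        )],
    )],
)
print(well_control.goal, "->", well_control.elements[0].name)
```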
The literature review suggests three main practical requirements for an approach that creates a better link between task analysis, i.e., categorical human error analysis, and HRA, i.e., human error probability calculations, as follows:

• Multidisciplinary. Relevant across popular human factors and engineering domains that study technical, organizational, and human factors in safety management.

• Generic. Relevant across process control technologies and human behavioural constructs with levels for describing human task performance, i.e., relevant both to generic task analysis and to models of causality adopted in the quantification of human error probabilities in HRA.

• Compliant. Relevant to governing barrier management principles in offshore regulations. An example is the Petroleum Safety Authority Norway (PSA) guideline to barrier management in the Norwegian offshore industry [2].

This article describes research performed to address the quality assurance issues in drilling HRA that may result from poor integration of task analysis in the drilling HRA procedure. The objective of this research is to improve well system safety through the consistent performance of HRA in probabilistic risk assessments of offshore drilling activities.

The structure of the article is as follows: Section 2 describes the approach developed, which includes selected steps in the procedure for the proposed offshore drilling HRA method [1]. The approach includes clarifications and modifications made to a generic hierarchical task analysis (HTA) framework relevant to the categorical evaluation of human task performance requirements in the HRA procedure. In Section 3, a drilling crew training scenario is used as a case study to realistically demonstrate and discuss an application of the approach. Finally, Section 4 includes concluding remarks from the research and suggestions for further work.

Proposed Task Analysis Method in HRA

This article represents the completion of previous work related to developing an explicit integration of generic task analysis within the procedure of the drilling probabilistic risk assessment (DPRA) method, which is proposed for risk control during offshore drilling activities [5]. The boxes shown with greyscale in Figure 1 illustrate the focus of the research presented in this article in the context of the DPRA method procedure [19]. As shown in Figure 1, the task analysis follows a task screening process that identifies critical tasks to be analysed, and the task analysis results are further used to update the DPRA causal model [1,19]. The adaptations are based on recognized concepts: (i) hierarchical task analysis (HTA) [39]; (ii) the structured analysis and design technique (SADT) [40] and basic concepts of failure analysis [41]; and (iii) quality function deployment (QFD) [42] and the analytical hierarchy process (AHP) [43]. A description of the key elements in the approach follows in the next sections.
Terminology in Task Analysis

A crisp definition of key concepts is crucial to the quality of any multidisciplinary risk analysis. This section introduces the main concepts for task analysis based on the article's literature review and previous work on the integration of engineering failure and risk analysis with traditional human factors task analysis [5].

Task analysis may be defined as an analysis of human performance requirements which, if not accomplished in accordance with system requirements, may have adverse effects on system cost, reliability, efficiency, effectiveness, or safety ([44], p. 1). Task analysis aims to describe the manual and mental processes required for one or more operators to perform a required task [45]. The analysis typically results in a hierarchical representation of the steps required to perform a main task for which there is a desired outcome(s) and for which there is some lowest-level action, or interaction, between humans and machines, denoted as the human-machine interface (HMI).

Human (operator) error probability (HEP) and human failure events (HFE) are the main concepts in HRA, which generally refer to basic events in bowtie risk analysis. For example, NUREG/CR-6883 ([7], p. 27), similarly to NUREG/CR-6350 ([29], p. 2-10), states that "HEP is the probability of the HFE", where HFE is defined as "a basic event that represents a failure or unavailability of a component, system, or function that is caused by human inaction or an inappropriate action".
Table 2 summarizes terms relevant to task analysis for offshore drilling activity (adapted from [5]).

Human failure event: A collective term for an event that represents a failure or unavailability of a component, system, or function that is attributed to human inaction or an inappropriate action. Note: A human failure event may include many operator errors consolidated as a defined scenario.

Operator error: Failure of an operator to act according to stated performance requirement(s). Note 1: Operator errors are associated with normative human (individual or team) behaviour or unsafe acts, which are not intended or not desired. Note 2: Operator errors are associated with a predefined level of departure accepted as conclusive evidence. Departure denotes a discrepancy between a computed, observed, or measured operator performance and the target stated in the performance requirement standard.

Operator performance requirement: A stated need or expectation about operator performance considered necessary in order to accomplish a given task objective. Note: Operator performance requirements may: (i) be an expectation implied from the human-machine interface; (ii) by implication, also cover what the operator should not do.

Operator error mode: The manner of nonconformity in which operator error occurs. Note: Conformity means that specified requirements relating to product, process, system, person, or body are fulfilled by demonstration.

Operator error cause: A set of circumstances that impairs recovery from undesired effects of operator behaviour.

Operator performance influence: A process of departure or recovery described by workplace conditions and latent human error tendencies. Note: The consolidated description of the departure process in human performance analysis may typically include a set of performance-influencing factors.

Operator error effect: Observable consequence of operator error, within or beyond the boundary of a sociotechnical system entity. Note: Effects of operator actions may be associated with short-term effects on task objectives. For example, an action may be categorised as: (i) recovery; (ii) departure; or (iii) indifferent.

HTA in Task Analysis

HTA is a popular task analysis technique that is considered a central approach in ergonomic studies [39]. As illustrated in Figure 2, the HTA produces a description of tasks in a hierarchy, beginning with a task at the highest level consisting of objectives expressed by the goals of the sociotechnical system, which in turn are decomposed into operation subobjectives and lower-level actions [39]. Actions are defined as the smallest individual specific operations carried out by operators interacting with a technical system, or by the system itself, and are often procedural in nature, with an implied or explicit intended sequence.
HTA in Task Analysis

HTA is a popular task analysis technique that is considered a central approach in ergonomic studies [39]. As illustrated in Figure 2, the HTA produces a description of tasks in a hierarchy, beginning with a task at the highest level consisting of objectives expressed by the goals of the sociotechnical system, which in turn are decomposed into operation subobjectives and lower-level actions [39]. Actions are defined as the smallest individual specific operations carried out by operators interacting with a technical system, or by the system itself, and are often procedural in nature, with an implied or explicit intended sequence.

SADT in Task Analysis

SADT is a popular failure analysis technique that, similarly to HTA, describes technical function objectives at different system breakdown levels. However, the function requirements in SADT are depicted as process blocks, with arrows that describe function-level inputs and outputs, as shown in Figure 3 [40]. Input takes the form of the basic energy, materials, and information required to perform the function. Control elements govern or constrain how the function is performed. Mechanism or environment refers to the people, facilities, and equipment necessary to carry out the function.
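To make the combined representation concrete, the sketch below models an HTA node decorated with the SADT block attributes (inputs, controls, outputs, mechanisms). The class design and the example objectives are illustrative assumptions, not reproductions of the article's Figures 2 and 3.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskNode:
    """One node in an HTA hierarchy, decorated with SADT block attributes."""
    objective: str                                        # function stated in the block
    inputs: List[str] = field(default_factory=list)       # situational elements
    controls: List[str] = field(default_factory=list)     # performance requirement standards
    outputs: List[str] = field(default_factory=list)      # results of performance
    mechanisms: List[str] = field(default_factory=list)   # supporting PIFs/environment
    children: List["TaskNode"] = field(default_factory=list)

    def leaf_actions(self):
        """Return the lowest-level actions (leaves) under this node."""
        if not self.children:
            return [self]
        actions = []
        for child in self.children:
            actions.extend(child.leaf_actions())
        return actions

# Illustrative three-level hierarchy: task -> operation -> actions.
task = TaskNode(
    objective="Secure well after kick",
    children=[
        TaskNode(
            objective="Detect kick",
            children=[
                TaskNode(objective="Monitor mud pit level",
                         inputs=["pit level readings"],
                         controls=["drilling programme"],
                         outputs=["kick alarm"],
                         mechanisms=["driller", "pit level sensors"]),
                TaskNode(objective="Perform flow check"),
            ],
        ),
    ],
)

for action in task.leaf_actions():
    print(action.objective)
```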
The HTA in Figure 2 describes three task breakdown levels with parallels in failure analysis [41]: (i) system; (ii) items; and (iii) components. With this structural similarity in mind, we develop the HTA further by adopting concepts from SADT [40] and functional block diagrams [41]. With consideration of the DPRA causal model [1,19], we consider the following HTA-SADT diagram definitions:

• Task, operation, and action objectives as 'functions' stated in the block.
• Performance requirement standards serve as the 'control system'.
• Situational elements provide the 'inputs', which may be described in terms of operator perception and focus of attention; for example, a process of hearing, seeing, smelling, tasting, and feeling the vicinity at the action level, and on a higher level as objects, events, people, systems, and environmental factors associated with goals [46].
• Results from the performance of tasks, operations, and actions are the 'outputs'.
• PIFs provide the supporting 'environment and mechanisms'.

To maintain coherence across the three levels of analysis, it is advised to follow the documentation from performance requirement standards identified in action-level plans and procedures, tracing upwards in the organisation via relevant work process objectives. As such, the result from the combination of HTA and SADT is a bottom-up approach to task analysis.

In failure analysis, we assign criticality classifications to actions in the task analysis to help prioritise further efforts according to the matrix shown in Figure 4. For example, monitoring of changes in mud pit levels during drilling is viewed as an essential action in well kick detection. Actions may also be viewed as auxiliary, i.e., introduced in support of essential actions. Examples of auxiliary actions in drilling are typically actions performed to reduce the risk of drilling process upsets, such as stuck pipe incidents. A planned drilling operation may also conceivably include superfluous actions, that is, actions not required for successful task completion. Superfluous actions are undesired, since they may create a high noise-to-signal ratio [14]. For the purpose of the HRA matrix in Figure 4, we also classify the degree of mental and physical effort involved for the operator or crew to perform actions based on popular levels of human behaviour [47]. Indicated on Figure 4 are the scores assigned to each class (upper right-hand box) and tag numbers relevant to classifications made of the actions considered in the case study in the next section.
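As an illustration of how such a matrix could be encoded, the following sketch maps the two classification axes, action importance (essential, auxiliary, superfluous) and behaviour level (assuming the skill-rule-knowledge taxonomy behind [47]), to scores in (1, 3, 5, 7). The particular score table is an assumption for illustration; the article's Figure 4 defines the actual assignment.

```python
# Hypothetical encoding of the Figure 4 criticality matrix.
# The score mapping below is an illustrative assumption, not the article's.

IMPORTANCE = {"superfluous": 0, "auxiliary": 1, "essential": 2}   # action class
EFFORT = {"skill": 0, "rule": 1, "knowledge": 2}                  # behaviour level [47]

# Illustrative score table indexed as [importance][effort], scores in (1, 3, 5, 7).
SCORES = [
    [1, 1, 3],   # superfluous
    [3, 3, 5],   # auxiliary
    [5, 5, 7],   # essential
]

def criticality_score(importance: str, effort: str) -> int:
    return SCORES[IMPORTANCE[importance]][EFFORT[effort]]

# Example: monitoring mud pit levels is essential and largely rule-based.
print(criticality_score("essential", "rule"))  # -> 5 under this assumed mapping
```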
Causality Classifications in Task Analysis

Figure 5 illustrates the causal classification scheme used for the task analysis. As can be seen, operator error mechanisms are divided into individual, workplace, and organisational PIFs. The PIFs are also associated with other cause categories, shown in boxes with dashed lines below. The scheme reflects operator error as a process of departure that follows as a result of natural exploratory behaviour [47], where PIFs describe an error-forcing context [29] as encountered in a situation with a set of circumstances where workplace factors and latent human error tendencies may easily combine and result in operator error [19].
The categories derive from the Human Factors Analysis and Classification System (HFACS) and DPRA, which both adopt Reason's hierarchical energy defence model. The combination in Figure 5 of preconditions from HFACS with existing individual factors defined in DPRA could lead to the introduction of ambiguous terms. We therefore consider the preconditions from HFACS strictly as non-workplace-related error tendencies in the task analysis.

For the purpose of validation, the causal classification scheme has been applied to four well accident sequence descriptions provided in previous work [19]. The results from this exercise are shown in Table 3. The data sources are publicly available reports from well accident investigations. The authors faced challenges in classifying and quantifying explicit contributions from individual causal factors that were documented with limited detail.

The Snorre accident may be described as the result of deficient competence, oversight, and information: first, a mistake made by the crew and supervisors in accepting the plan to use the outer casing and openhole as the main barrier; next, a lack of recovery caused by not noticing the situation and not maintaining the mandatory two well barriers. The Montara accident may be described as the result of deficient governance, competence, oversight, and information: first, a mistake made by the crew and supervisors in agreeing to move the rig (main barrier) from the well without compensation, presumably motivated in part by cost-saving; next, a lack of recovery caused by not noticing the situation and not maintaining the mandatory two main barriers. The Macondo accident may be described as the result of deficient governance, competence, oversight, and information: first, a mistake/violation made by the crew and supervisors, who accepted an inconclusive barrier verification test; next, a lack of recovery caused by not noticing the situation and not maintaining the mandatory two well barriers. The Gullfaks accident is complex, but may be described as the result of deficient competence and oversight associated with the application of a new technology: first, a mistake/violation made by the crew and supervisors, who accepted a revision of the drilling program without formal change management. The intention, presumably, was to follow recognised practices established with older technology, without considering the subtle implications of decisions affecting risk factors such as casing design, casing wear, casing stress, and wellbore stability.
Apply QFD in Task Analysis

In this section, we apply a familiar formal approach to the task analysis as part of updating the drilling HRA causal model. The approach is based on QFD [42], which is used as a means for generating normalised weights, $w_j$, of operational-level PIFs, denoted RIF$^I$s, in the HRA [1]. The QFD concept, with its application of "quality houses", includes well-known methods and techniques for stakeholder preference elicitation and evaluation in product or process development ([42], Annex A). For example, evaluations may concern relationships between action performance requirements and action error causes, shown with quality house number one to the left in Figure 6.

Figure 6 illustrates the QFD approach with the use of two quality houses that result in an evaluation of priority weights, $w_j^{II}$, which corresponds to an evaluation of operation-level PIFs in HTA and HRA. Respectively, these PIFs are recognised as workplace influences in the generic causal scheme shown in Figure 5 (see also Table 3).
The proposed QFD-based approach consists of two main stages, described respectively by house of quality (HoQ) number one and two in Figure 6. The first stage covers an evaluation made of action performance requirements versus action error causes identified in the activity HEP/HFE. Next, the action error causes with normalised weights produced in the first stage are reapplied in an evaluation of the same action error causes versus relevant operation-level PIFs in the HRA for the same activity. The resulting normalised weights are used directly as updated weights for PIFs in the HRA causal model.

HoQ 1 includes a roof (correlation matrix) that facilitates the orthogonal treatment of the action-level causes, which similarly are handled by the existing HRA procedure on the operational level of HoQ 2. The HoQ 1 correlation matrix is resolved in the approach with the use of AHP. The action-level causes are treated in AHP as three independent subgroups in order to reduce the effort required for achieving consistent pairwise comparisons. The subgroups are defined according to the classification given for causes under individual influences in Figure 5, and are represented by submatrices $C_1$, $C_2$, and $C_3$. In practice, the evaluation is carried out for an activity according to the following procedure (a computational sketch of steps 2-6 follows the list):

1. Define the list of actions in $(1, \ldots, m)$. Assign each a priority score, $p_i^I$, by adopting the critical importance score assigned in the task analysis (Figure 4); i.e., scores are in (1, 3, 5, 7).
2. Evaluate the correlation matrices $C_1$, $C_2$, and $C_3$. Use AHP to determine the normalised weights of the causes defined in each subgroup, $w_k^{C_1}$, $w_k^{C_2}$, and $w_k^{C_3}$. Evaluate the correlations with scores in (1-weak, 3-moderate, 5-strong). Check that the consistency ratio is less than 0.1 to validate the judgments made [43].
3. Evaluate the relationship matrix $R_1$ to determine the normalised priority weight of each subgroup matrix $C_1$, $C_2$, and $C_3$. The relationship between the submatrices and actions is quantified using scores, $s_{ij}^I$, in (1-weak, 3-moderate, 5-strong). The subgroup priority weight is defined as $W_j^I = \sum_i p_i^I s_{ij}^I$, and the submatrix normalised priority weight as $w_j^I = W_j^I / \sum_j W_j^I$.
4. Update the weights of the action error causes defined within each submatrix. The updated weight of cause $k$ in submatrix $j$ is defined as $w_{jk} = w_j^I w_k^{C_j}$.
5. Define priority scores for the action error causes transferred to HoQ 2. The updated weights from the previous step 4 are here reused as priority scores, $p_i^{II}$, in the listing.
6. Evaluate the relationship matrix $R_2$ to determine the normalised priority weight of each operational-level PIF in the activity given in $(1, \ldots, n)$. The priority weight for PIF $j$ is defined as $W_j^{II} = \sum_i p_i^{II} s_{ij}^{II}$, and the normalised priority weight as $w_j^{II} = W_j^{II} / \sum_j W_j^{II}$.
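The sketch below illustrates the mechanics of steps 2-6 with NumPy: an AHP principal-eigenvector weighting with a consistency-ratio check, followed by the two weighted-sum normalisations of the quality houses. All matrices, scores, and dimensions are small hypothetical examples and do not reproduce the case study values.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # AHP random indices (Saaty)

def ahp_weights(pairwise: np.ndarray):
    """Principal-eigenvector weights and consistency ratio for an AHP matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1) if n > 2 else 0.0
    cr = ci / RI[n] if RI.get(n, 0.0) > 0 else 0.0
    return w, cr

def hoq_weights(priority: np.ndarray, relationship: np.ndarray):
    """Normalised HoQ priority weights: W_j = sum_i p_i * s_ij, then normalise."""
    W = priority @ relationship
    return W / W.sum()

# Step 2: one illustrative 3x3 pairwise comparison for subgroup C1.
C1 = np.array([[1.0, 3.0, 5.0],
               [1/3, 1.0, 3.0],
               [1/5, 1/3, 1.0]])
w_C1, cr = ahp_weights(C1)
assert cr < 0.1, "inconsistent judgments -- revisit pairwise comparisons"

# Step 3: HoQ 1 with m = 2 actions (priority scores from Figure 4 classes)
# against 3 subgroups; relationship scores in {1, 3, 5}.
p_I = np.array([7.0, 5.0])            # two essential actions
R1 = np.array([[5.0, 3.0, 1.0],
               [3.0, 5.0, 3.0]])
w_I = hoq_weights(p_I, R1)            # normalised subgroup weights

# Step 4: combine subgroup weight with within-subgroup cause weights.
w_causes_C1 = w_I[0] * w_C1           # updated weights of causes in C1

# Steps 5-6: reuse updated cause weights as HoQ 2 priority scores
# against n = 2 operation-level PIFs.
p_II = w_causes_C1
R2 = np.array([[5.0, 1.0],
               [3.0, 3.0],
               [1.0, 5.0]])
w_II = hoq_weights(p_II, R2)          # updated PIF weights for the HRA model
print(np.round(w_II, 3))
```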
A search of the internet and Scopus indicates that there are few explicit associations made between QFD and HRA in the literature. However, the use of QFD is not new to safety analysis. For example, several basic applications of QFD have been proposed in reliability engineering [49] and to evaluate hazards within occupational safety analysis [49-52]. The safety analysis literature also includes a more complicated adaptation of QFD, with the use of fuzzy set theory to describe uncertainties related to the elicitations and evaluations performed [53]. The implementation of fuzzy set theory or similar approaches to capture uncertainties may also be attractive for further work; for example, the use of triangle-, trapezoid-, or bell-shaped fuzzy numbers may be investigated for the various linguistic evaluations. Alternatively, as a first modification to procedure Step 3 and Step 6, we may simply consider that a priority score defines the probability distribution for the random variable $S_{ij}$. Let $p(s) = \Pr(S_1 = s_1, \ldots, S_n = s_n)$ represent the joint probability distribution function for $n$ column entries. The updated impact of the scores on a priority weight can then be calculated numerically as $\bar{W}_j = \sum_{\forall s} p(s) \sum_i p_i s_{ij}$, where $\sum_{\forall s}$ denotes the sum over all possible values of the vector $s$. For example, Table 4 simply treats the HoQ 1 relationship scores used in the case study in the next section as being representative of independent triangle distributions, defined respectively with: score 1 → (1, 1, 3); score 3 → (1, 3, 5); and score 5 → (3, 5, 5), where (., ., .) denotes the minimum, peak, and maximum triangle values. A numerical sketch of this treatment follows below.

The HoQ approach provides a systematic means for the orthogonal evaluation of PIFs within and between causal levels for the purpose of HRA. However, the potential reliance on the anchored judgment and intuition of single individuals in AHP should be avoided [1]. For example, the Delphi method may be adopted to combine results from multiple expert elicitations [54]. The list of action error causes should also be ordered according to importance in order to reduce any tendency for bias introduced by typical linear evaluations made with AHP. The ordering of the causes in the case study example follows from the validation of the causal scheme performed with the accident data in Table 3.
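To illustrate the Table 4 treatment numerically, the sketch below replaces each deterministic HoQ 1 relationship score with the mean of its triangular distribution (the mean of a triangle (a, b, c) is (a + b + c)/3) and recomputes the normalised priority weights; by linearity, expected scores give the expected unnormalised weights. The input matrices are the same hypothetical ones used in the earlier sketch.

```python
import numpy as np

# Triangle (min, peak, max) assigned to each deterministic score (Table 4).
TRIANGLES = {1: (1, 1, 3), 3: (1, 3, 5), 5: (3, 5, 5)}

def triangle_mean(score: int) -> float:
    a, b, c = TRIANGLES[score]
    return (a + b + c) / 3.0

# Same illustrative HoQ 1 inputs as before: 2 actions x 3 subgroups.
p_I = np.array([7.0, 5.0])
R1_scores = [[5, 3, 1],
             [3, 5, 3]]

# Under independence, E[W_j] = sum_i p_i * E[S_ij].
R1_expected = np.array([[triangle_mean(s) for s in row] for row in R1_scores])
W = p_I @ R1_expected
w_expected = W / W.sum()   # normalising the expected W's approximates E[w_j]

print(np.round(R1_expected, 2))
print(np.round(w_expected, 3))
```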
Case Study

This section presents a case study that demonstrates the practical application of the task analysis method in HRA. The case study is based on a simulator-training scenario with a focus on simultaneous activities, which augments the need to consider a wide set of performance requirements and causality descriptions in task analysis. The training scenario is relevant to practical application of the method because simulators are an important industry tool for the validation of drilling crews as qualified barrier elements in well operations. Simultaneous activities are defined as [55]: "Activities that are executed concurrently on the same installation, such as production activities, drilling and well activities, maintenance and modification activities, and critical activities". A critical activity is "any activity that potentially can cause serious injury or death to people, significant pollution of the environment, or substantial financial losses".

The case study describes a scenario where drilling and crane operations are both occurring on a floating rig. The lifting operations will cause movement and tilting of the rig, which may affect situational elements on the rig floor and potentially also the behaviour of the drilling crew. As an example used in the case study, the mud circulation breaks during drill pipe connections, which may cause a sufficient pressure drop in the wellbore to cause a kick influx. A smaller kick influx relevant to this scenario may be difficult to detect under these circumstances; namely, with a limited number of kick-indicating parameters and with pit level fluctuations occurring naturally due to rig movements.

HTA and SADT in Task Analysis

'Driller to activate the BOP in event of a well kick within 40 min' is the action used as the scenario to be analysed, where BOP denotes the blowout preventer; the embodiment of a representative HTA diagram is seen in Figure 7. Monitoring for changes in established well footprints and trends is given as the primary means available to the driller in the search for indications of a kick. The monitored parameters include mud pit level, indicators of return flow such as flowmeter paddles or the trip tank, rig pump pressure, rig pump speed, rate of drill bit penetration, drill bit torque, and the up and down weight of the drill string. If any of these parameters change, this may indicate that the well is kicking. If the driller acknowledges symptoms of a kick, the next step normally entails a diagnosis operation, denoted a flow check (Operation 1.2). If the flow check confirms a kick, the next steps for the driller are to secure the well by confirmed closure of the BOP, as indicated in Figure 7 by Subtask 2 and Operations 2.1 and 2.2.

Figure 8 illustrates the further SADT development of the HTA, which is the next step in the method. Unfortunately, governing documents, plans, and procedures relevant to this case study are not available to the public. The detailed task analysis may also easily become overly labour-intensive for the purpose of an article. Therefore, Figure 8 focuses on the Action 1.1.1 branch. Figure 4 includes the critical importance assigned to the respective Action tags 1.1.1-1.1.6 in the case study, and Action 1.1.1 is selected since it is categorized as essential to the operation.
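Action 1.1.1, monitoring the mud pit level, amounts to detecting a departure from an established trend. The following sketch is a deliberately simple illustration with made-up readings and an assumed tolerance band; it is not the detection logic used on a rig, where natural pit level fluctuations from rig movements would require more robust filtering.

```python
# Illustrative trend-departure check for mud pit level (not the article's algorithm).

def detect_departure(levels, window=10, tolerance=1.5):
    """Flag indices where the pit level deviates from its rolling baseline.

    levels    : sequence of pit volume readings (e.g., m^3)
    window    : number of readings that define the baseline
    tolerance : allowed absolute deviation before flagging (same units)
    """
    flags = []
    for i in range(window, len(levels)):
        baseline = sum(levels[i - window:i]) / window
        if abs(levels[i] - baseline) > tolerance:
            flags.append(i)
    return flags

# Made-up readings: stable around 50 m^3, then a gain consistent with an influx.
pit = [50.0 + 0.1 * (i % 3) for i in range(30)] + [52.5, 53.8, 55.1]
print(detect_departure(pit))  # -> [30, 31, 32], the influx readings
```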
Causal Classifications in Task Analysis

Figure 9 shows the embodiment of a causal classification scheme made from the task analysis with the HFE scenario development, and with explicit use of terminology relevant to different task breakdown levels, as in system failure analysis [41]. The concepts follow the structural levels of the HTA-SADT analysis, which provides logical HFE causality descriptions for the task failure scenario. Figure 9 naturally shows an undirected scheme, dislocated from the chain-of-event paradigm, where arrows show how concepts of human behaviour relate on all levels in an organisation.

Apply QFD in Task Analysis

The QFD approach in task analysis is applied to Operation 1.1 with an error cause of 'mistake' in the simulator-training scenario. Tables 5 and 6 show the results produced from the evaluations in procedure Step 2 and Step 3, respectively. The results reflect that the social and cognitive requirements for personnel in crews and command chains should increase with the inaccuracy of the technology used as activity aids. For example, measurements that concern mud returns from the well will often be more irregular and inaccurate than measurements of mud flowing into the well during drilling. Table 7 shows the matrix from the HoQ 1 relationship evaluations.

HEPs are linked to sets of traditional PIFs in HRA. The PIF sets are adopted from a cognitive basis [34]. NUREG-2114 [34] presents a consolidation of human performance and cognitive research into a framework for human error causal analysis. The framework comprises five macrocognitive functions associated with crew failure modes (CFMs). Teamwork is an example of one such defined function, which is associated with over forty PIFs. A large PIF list becomes unwieldy in QFD and AHP, but we may note that proximate causes are defined as a means of grouping the PIFs in an evaluation. This is similar to the grouping of action error causes in the method proposed here.

The cognitive basis builds on concepts of the perceptual cycle and sensemaking, which may be reasonable for causal analysis by trained experts who diligently follow procedures when performing control room tasks during internal, at-power events. These concepts suggest a causality focused on the long-term strategic and educational purpose of situation assessments, which involve recursive cognitive adaptation to familiar control room scenarios ([34], p. 76). Argued differently [18], the situation awareness concept may also consider situation assessments as fast and linear, serving as a basis for near-future actions directed at a novel, fast-paced, and noisy work environment. This may help explain the different definitions of mental factors noted between the cognitive basis and Figure 5, and indicates a potential need for a different cognitive basis in task analysis tailored to the HRA scope. The implications for workplace conditions in task analysis that follow from the different cognitive concepts used in HRA are not addressed here, but could be of interest as further work. This also concerns performance requirements that only consider a teamwork setting, since no individual can be made responsible for operating such complex power plants alone. For example, NUREG-2199 ([10], p. 16) only briefly discusses general requirements for task analysis, which are described by terms such as success requirements, cognitive requirements, maximum time requirements, task requirements, resource requirements, and physical requirements. Hence, the proposed method is more robust for task analysis in HRA for offshore drilling.
Conclusions

This article presents a novel method for explicitly linking the QFD and AHP concepts as systematic tools in task analysis for updating PIF weights in the HRA causal model. The method increases HRA procedure transparency and helps secure the consistent quality and performance of offshore drilling operation risk analysis. The method represents an improved tool for maintaining well control in cases where human task performance is crucial to well system risk.

QFD and AHP are well-known concepts, and the method has been demonstrated for the task analysis of historic well accident data as well as in a realistic case study. However, caution is advised when generalizing results from sector regulations, accident data, and case studies. Future research that may apply to this proposed task analysis method in HRA includes the use of (i) fuzzy set theory or similar approaches to help capture the uncertainties in task analysis, or (ii) HRA causality descriptions that adopt different types of performance requirements and cognitive bases.

Figure 2. Structure of hierarchical task analysis.
Figure 3. A functional block in a hierarchical task analysis and structured analysis and design technique (HTA-SADT) diagram.
Figure 4. Critical importance matrix of actions in task analysis.
Figure 5. Generic causal classification scheme showing latent human error tendencies and workplace conditions as influencing factors associated with operator error causes (based on [1,19,21,48]).
Figure 6. Quality function deployment for systematic evaluation of performance requirements between action and operation levels in task analysis using two quality houses.
Figure 7. HTA in task analysis of offshore drilling activity.
Figure 8. HTA-SADT in task analysis of offshore drilling activity.
Figure 9. Causal analysis and human failure event (HFE) scenario development in task analysis of offshore drilling activity.
Table 1. Literature overview of causality descriptions popular in oil and gas industry human reliability analysis (HRA).
Table 3. Well accident data causal classifications made in validation of task analysis.
Table 4. Evaluation of the house of quality 1 (HoQ 1) relationship matrix in a case study when scores represent triangle distributions.
Cost–Utility Analysis of Magnetic Resonance Imaging Management of Patients with Acute Ischemic Stroke in a Spanish Hospital

Introduction Stroke has a high rate of long-term disability and mortality and therefore has a significant economic impact. The objective of this study was to determine, from a social perspective, the cost–utility of magnetic resonance imaging (MRI) compared to computed tomography (CT) as the first imaging test in acute ischemic stroke (AIS). Methods A cost–utility analysis of MRI compared to CT as the first imaging test in AIS was performed. Economic evaluation data were obtained from a prospective study of patients with AIS ≤12 h from onset in one Spanish hospital. The measure of effectiveness was quality-adjusted life-years (QALYs), calculated from utilities of the modified Rankin Scale. Both hospital and post-discharge expenses were included in the costs. The incremental cost-effectiveness ratio (ICER) was calculated and a sensitivity analysis was carried out. The costs were expressed in Euros at the 2004 exchange rate. Results A total of 130 patients were analyzed. The first imaging test was CT in 87 patients and MRI in 43 patients. Baseline variables were similar in the two groups. The mean direct cost was €5830.63 for the CT group and €5692.95 for the MRI group (P = not significant). The ICER was €11,868.97/QALY. The results were sensitive when the indirect costs were included in the analysis. Conclusion Total direct costs and QALYs were lower in the MRI group; however, this difference was not statistically significant. MRI was shown to be a cost-effective strategy for the first imaging test in AIS in 22% of the iterations according to the efficiency threshold in Spain. Electronic supplementary material The online version of this article (doi:10.1007/s40120-015-0029-x) contains supplementary material, which is available to authorized users.

INTRODUCTION

According to the Global Burden of Diseases, Injuries, and Risk Factors Study [1], stroke was the second most common cause of death and the third most common cause of disability-adjusted life-years worldwide in 2010 [2]. Patients who survive a stroke have a higher risk of another stroke, ischemic heart disease, or dementia [3]. Stroke has a considerable economic impact during hospitalization and following discharge [4-11]. Major advances in acute stroke care include the creation of dedicated stroke units [12], thrombolytic therapy [13,14], and new diagnostic techniques, especially imaging techniques. Recent research into drugs for treating stroke is based on the identification of the diffusion-perfusion mismatch in magnetic resonance imaging (MRI). Despite technological advances in neuroimaging, computed tomography (CT) remains the examination of choice in patients with acute stroke [15]. MRI is more sensitive and more specific than CT in the early detection of acute ischemic stroke (AIS) [16-18]; moreover, the variability in the interpretation of results is lower with MRI [16]. MRI in patients with acute stroke allows for a rapid diagnostic evaluation and provides necessary and relevant information [19]. Furthermore, MRI techniques are as effective as CT for ruling out or defining the magnitude of hemorrhage [20-22]. Thrombolysis based on MRI >3 h after stroke onset is safer and potentially more effective than thrombolysis based on CT within 3 h in patients with acute stroke [23,24]. However, MRI is more expensive and less widely available than CT.
The current study aimed to determine, from a societal perspective, the cost–utility of MRI compared with CT as the first imaging test in patients with AIS.

METHODS

A cost–utility analysis from the societal perspective was developed. A 90-day time horizon was considered for outcomes, according to the hospital's stroke management protocol. Because the time horizon was less than a year, future discounting was not required.

Alternatives Evaluated

The alternatives evaluated were cranial CT and cranial MRI (diffusion and perfusion MRI, and angiography). These alternatives were selected because CT was the examination most used in the study period and MRI was the technology to be evaluated, owing to its advantages in sensitivity and specificity [16] and in depicting vascular occlusion and the mismatch area. Patients were assigned to undergo CT or MRI as the initial imaging technique as a function of the availability of scanners at the time of emergency room admittance: patients admitted between 8 a.m. and 8 p.m. on weekdays (except holidays) underwent MRI as the initial imaging test, whereas patients admitted between 8 p.m. and 8 a.m. on weekdays or at any time on weekends and holidays underwent CT.

Hospital and Post-Discharge Costs

According to the perspective of the study, both direct health costs and other patient costs were included. The expenditure for the following resources used in the hospital was quantified for each patient: cranial CT, cranial MRI, other diagnostic tests, physiotherapy, pharmacological treatment, and hospital stay. In addition, a questionnaire was used to obtain information from patients about the post-discharge resources used in the first 90 days after onset, for example, institutionalization, rehabilitation, home adaptations, caregivers, and pharmacological treatment. All expenditures were expressed in Euros (2004).

Effectiveness

The measure of effectiveness was quality-adjusted life-years (QALYs). QALYs were estimated from utility values obtained from the modified Rankin Scale (mRS) and the time (in years) that the patient remained in that health state (mRS). The mRS is a tool widely used to assess primary or secondary outcomes in multicenter studies of stroke. The scale is validated in several languages, including Spanish [25]. This scale has been used in other studies, including a clinical trial of a neuroprotective stroke agent, and reflects changes in the health status of patients [26]. The mRS was determined by a structured interview [27] before stroke, at hospital discharge, and 90 days after stroke. The investigator who assessed the mRS was trained and certified in the use of the mRS and was blinded to the diagnostic imaging test performed. A favorable clinical outcome was defined as an mRS score ≤2. For each value of the mRS, a utility value obtained from previous studies [28,29] was assigned. These utilities were used because they were obtained from the Spanish general population through different methods of measuring preferences. Each health state (mRS) was associated with a utility value [28]. Because no utility value was assigned to patients with an mRS score of zero in that study [28], it was decided to use a utility value of 0.90, obtained from a cost–utility analysis of recombinant tissue plasminogen activator (rt-PA) in patients with stroke [29]. Table 1 shows the utility value used in the current study for each mRS value.
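As an illustration of the QALY computation described above, the sketch below weights each follow-up interval by the utility of the patient's mRS state and the interval length in years. The utility values in the dictionary are placeholders rather than the article's Table 1 values, except for the 0.90 assigned to mRS 0, which is stated in the text.

```python
# Illustrative QALY calculation over the 90-day horizon.
# Utilities below are placeholders, except mRS 0 = 0.90 (stated in the text).

UTILITY = {0: 0.90, 1: 0.85, 2: 0.70, 3: 0.50, 4: 0.30, 5: 0.10, 6: 0.0}

def qalys(states):
    """states: list of (mRS score, days spent in that state)."""
    return sum(UTILITY[mrs] * days / 365.0 for mrs, days in states)

# Example: mRS 2 until discharge on day 10, then mRS 3 until the day 90 assessment.
patient = [(2, 10), (3, 80)]
print(round(qalys(patient), 4))  # QALYs accrued over 90 days
```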
Incremental Cost-Effectiveness Ratio

The total costs (hospital and post-discharge costs) for each study group were calculated. The incremental cost-effectiveness ratio (ICER) was calculated as follows: (Cost MRI - Cost CT)/(QALY MRI - QALY CT).

Clinical Data Collection

In addition to the data for the economic evaluation, the following variables were recorded: sex, age, cardiovascular risk factors, prior treatment, date and hour of symptom onset, and prior functional dependence. Stroke severity was determined daily with the National Institutes of Health Stroke Scale (NIHSS).

Analysis

Data were analyzed for associations between categorical variables with the Chi-square test. The comparison of medians was done with the non-parametric Mann-Whitney U test. The comparison of means was done with Student's t test. Statistical significance was set at 0.05. One-way and multi-way sensitivity analyses were performed. In the one-way analysis, utility parameters (obtained by visual analog scale [VAS]), indirect costs, including lost productivity from days off work (obtained in the interview with the patient or caregiver), and adjusted QALYs (assuming the patient's initial mRS remained unchanged) were considered. The multi-way sensitivity analysis was performed using non-parametric bootstrapping [30]; a total of 1000 bootstrap samples were generated.

RESULTS

Of 472 consecutive patients with stroke, 130 fulfilled the inclusion criteria. Of these, 87 patients underwent CT as the first imaging test and 43 patients underwent MRI. A total of 117 patients were alive 90 days after stroke, and 3 patients were lost to follow-up. Baseline values did not differ between the two groups: 60% were male, most were retired, and 85% had an mRS score of 0 before stroke (Table 2). In both groups, hospital stay accounted for approximately 80% of hospital costs, and institutionalization accounted for nearly 45% of the post-discharge costs up to 90 days after stroke (Table 3; table notes: patients whose discharge destination was a nursing home, rehabilitation center, or another hospital; percentage of patients who needed to use the resource; percentage of patients who needed to pay a caregiver or whose relatives left their jobs to care for them). On the other hand, no significant differences between the two groups were found in mRS (Table 4).

Table 5 shows the results of the ICER analysis. The use of MRI was observed to be a less expensive alternative, but resulted in fewer QALYs than CT for the diagnosis of AIS. In the one-way sensitivity analyses, the ICER increased in all cases (Table 6). In the ICER analysis performed with the bootstrap, the simulated cases mainly fall in quadrants III and IV (Fig. 1). This result is confirmed by the cost-effectiveness acceptability curve, where it can be appreciated that 22% of the iterations of the MRI result in a cost per QALY of €30,000 or less, regarded as the limit of efficiency in Spain (Fig. 2).
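The non-parametric bootstrap and acceptability-curve reading described above can be sketched as follows. The per-patient cost and QALY arrays are hypothetical (only the group sizes, 43 MRI and 87 CT, and the approximate mean costs echo the study); the proportion of resamples that are cost-effective at €30,000 per QALY corresponds to the acceptability-curve value reported.

```python
import numpy as np

rng = np.random.default_rng(0)

def boot_means(costs, qalys):
    """One bootstrap resample of an arm: paired (cost, QALY) per patient."""
    idx = rng.integers(0, len(costs), len(costs))
    return costs[idx].mean(), qalys[idx].mean()

def bootstrap_cea(cost_mri, qaly_mri, cost_ct, qaly_ct, n_boot=1000, wtp=30000.0):
    """Return bootstrap (dCost, dQALY) draws and P(MRI cost-effective at wtp)."""
    draws = []
    for _ in range(n_boot):
        cm, qm = boot_means(cost_mri, qaly_mri)
        cc, qc = boot_means(cost_ct, qaly_ct)
        draws.append((cm - cc, qm - qc))
    draws = np.array(draws)
    # Net monetary benefit criterion: wtp * dQALY - dCost > 0.
    p_ce = np.mean(wtp * draws[:, 1] - draws[:, 0] > 0)
    return draws, p_ce

# Hypothetical per-patient data with the study's group sizes (43 MRI, 87 CT);
# the means loosely echo the reported group means, the spreads are invented.
cost_mri = rng.normal(5693, 1500, 43); qaly_mri = rng.normal(0.10, 0.04, 43)
cost_ct  = rng.normal(5831, 1500, 87); qaly_ct  = rng.normal(0.11, 0.04, 87)

draws, p_ce = bootstrap_cea(cost_mri, qaly_mri, cost_ct, qaly_ct)
print(f"P(MRI cost-effective at 30,000 EUR/QALY) = {p_ce:.2f}")
```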
DISCUSSION

The findings of the current study show that clinical outcomes at discharge and 90 days after stroke, as well as the total direct costs, were similar for patients in the two groups. Interestingly, although an MRI examination was nearly four times more expensive than a CT examination to assess AIS, the overall direct hospital costs were not higher in the group examined with MRI. These results are in line with those reported by Beinfeld and Gazelle [31], who found no increase in hospital costs between 1996 and 2002 despite a substantial increase in the use of CT and MRI.

The median length of stay (LOS) in the stroke unit was lower in the MRI group, which may be due to earlier diagnosis and initiation of treatment, possibly attributable to greater confidence in the information provided by MRI. In both groups, the mean LOS was lower than previously reported values (9.2-26 days) [6,32,33]; however, those studies included patients with cerebral hemorrhage. Total direct hospital costs did not differ between groups, although the mean cost in the MRI group was slightly lower due to the shorter hospital stay in that group. As reported in other studies [6,7,11,34-36], hospital stay was the largest single expenditure in the acute phase, particularly when the patient was in the stroke unit.

In line with another study in stroke patients [10], only 24% of patients in the current study were employed when the stroke occurred. Most patients returned home after hospital discharge, also in agreement with the results of other studies [9,10]. This was reflected in the sensitivity analysis: when costs due to lost productivity (days off work) were included, the ICER was higher. More than half of the patients in the current study needed a caregiver; 71% of the caregivers were not paid, a proportion in line with the results of a previous study [9], in which 74% of the patients who required assistance were attended by family members or friends. Thus, informal care plays an important role in stroke. Approximately 26% of the caregivers in the current study were family members who had to leave their jobs to care for the patient.

In this study, the use of MRI rather than CT as the initial diagnostic tool for assessing stroke patients did not result in a better outcome at discharge or 90 days after stroke. However, no changes were made in the treatment protocol for patients undergoing MRI for the initial assessment. Because rt-PA treatment in the first 3 h is based on the absence of hemorrhage or extensive infarct, this information can be reliably obtained with a simple CT study. There were no significant differences between the two groups in the distribution of patients treated with rt-PA. However, a recent study [37] found that using the MRI-based penumbra to select patients for intravenous rt-PA after routine CT in patients with AIS increased costs, but was more cost-effective.

No significant differences between the groups in mRS were found. Likewise, no significant differences were found in the parameters calculated from the mRS, such as utilities and QALYs. The ICER in the simulations varied widely due to the lack of significant differences in the effectiveness of the two techniques. When a sensitivity analysis using utility values obtained with the VAS was performed, the variation in effectiveness between the two groups remained minimal (it was even lower than in the analyses of the baseline data), so the ICER was higher. As shown in the graph of the cost-effectiveness plane (Fig. 1), a considerable proportion of the results of this study were located in the third quadrant, indicating that MRI is less expensive, but also less effective in terms of QALYs. Thus, MRI is not considered a dominant alternative or a dominated alternative.
However, in a proportion of the bootstrap results, MRI would be located in quadrant IV, meaning that it was less effective and more costly than CT. In those iterations, MRI would be a dominated alternative.

Discussing two systematic reviews of the cost-effectiveness of CT and MRI for some clinical indications, the authors of [38] highlight that diagnostic imaging technologies can improve or expedite the diagnosis of disease, but do not necessarily change outcomes. Indeed, many factors can affect a patient's outcome after imaging.

The current study has some limitations. The small sample size may have made it difficult to detect some significant differences; however, a bootstrap analysis was performed to increase the power of the study. The method for assigning patients to the study groups depended on the time of onset of stroke, and it cannot be ruled out that this introduced a selection bias. Nonetheless, at admission, the groups were similar in neurological deficit, cardiovascular risk factors, and prior disability. The time horizon is important in economic evaluations. Here, 90 days was used; this follow-up period is similar to that of other studies of the costs of managing stroke [8,39,40]. In this sense, one study of the cost-effectiveness of thrombolytic therapy with alteplase [41] concluded that thrombolytic therapy based on MRI is not cost-effective in the short term, although it may be in the longer term (at 3 years and beyond).
Estrogen receptors α and β in the central amygdala and the ventromedial nucleus of the hypothalamus: Sociosexual behaviors, fear and arousal in female rats during emotionally challenging events

Estrogen receptors (ERs) are involved in several sociosexual behaviors and fear responses. In particular, the ERα is important for sexual behaviors, whereas the ERβ modulates anxiolytic responses. Using shRNA directed either against the ERα or the ERβ RNA (or a luciferase control) encoded within an adeno-associated viral vector, we silenced these receptors in the ventromedial nucleus of the hypothalamus (VMN) and the central amygdala (CeA). We exposed ovariectomized female rats, sequentially treated with estradiol benzoate and progesterone, to five stimuli previously reported to elicit positive and negative affect. The subjects were housed in groups of 4 females and 3 males in a seminatural environment for several days before hormone treatment. We analyzed the frequency of a large number of behavior patterns. In addition, we performed analyses of co-occurrence in order to detect changes in the structure of behavior after infusion of the vectors. Silencing the ERα in the VMN disrupted lordosis and showed some anxiolytic properties in aversive situations, whereas silencing the ERβ in this structure had no effect. This was also the case after silencing the ERα in the CeA. Silencing the ERβ in this structure increased risk assessment, an expression of anxiety, and increased olfactory exploration of the environment. We hypothesize that the ERβ in the CeA has an important role in the well-established anxiolytic effects of estrogens, and that it may modulate arousal level. Furthermore, it seems that the ERα in the VMN is anxiogenic in aversive or threatening situations, in agreement with other studies.

Introduction

Estrogen receptors (ERs) play an important role in the modulation of female sexual and social behaviors. The ERα is crucial for sexual behaviors, determining both receptivity and sexual approach behaviors [1-6]. These effects are mediated by the ventromedial nucleus of the hypothalamus (VMN) [5,7], and silencing of the ERα in this brain area results in diminution or suppression of the lordosis response in female rats and mice. In contrast, the ERβ does not seem to be involved in female sexual behaviors [4,8,9]. In addition to their effects on female sexual behaviors, estrogens have anxiolytic properties in several standard tests, for example the elevated plus-maze [10], the light/dark choice procedure [11], and the open field [12]. These effects are usually attributed to the ERβ [12-15], whereas the ERα is considered to promote anxiety. Silencing of the ERα decreased indicators of fear in a light/dark choice test [7], and an ERα agonist increased fear-potentiated startle [16], just to mention two examples. However, there are also reports of anxiolytic effects of the ERα [17]. The conflicting results could be reconciled by proposing that the ERα has a context-dependent, dual effect on anxiety, being anxiolytic in safe environments and anxiogenic in threatening ones [18]. In a previous study [1], we made a detailed description of the behavioral effects of an ERα and an ERβ agonist in female rats living in a seminatural environment in which emotional challenges could be introduced. We found that the ERα agonist propyl-pyrazole-triol (PPT) increased fear reactions in threatening contexts (white noise and fox odor) only.
The ERβ agonist diarylpropionitrile (DPN) had some anxiolytic effects in these contexts. In our earlier study, the ER agonists were administered systemically, precluding any speculation as to their site of action. Here, we evaluate the role of the ERs in specific brain areas by silencing the expression of either the ERα or the ERβ with local administration of shRNA directed against each of these receptors. One target site was the VMN. The ERα within this nucleus is essential for female sexual behaviors and has been reported to be one site of action for the anxiogenic effects of this receptor [7,19]. It was originally reported that the VMN contains a large number of ERα but very few, if any, ERβ [20]. However, later studies revealed that the ERβ is indeed expressed in the VMN in adult animals, at least in the ventrolateral portion [21,22]. The behavioral function of this receptor within the VMN has not been evaluated. Even though it is unlikely that sexual behavior would be modified by silencing the ERβ, emotional responses to environmental disturbances might be modified.

The central amygdala (CeA) is the main source of output from the basolateral and medial amygdala [23] and has been found to be important for fear and anxiety responses [24,25]. It appears that corticotropin-releasing hormone (CRH)-containing neurons in this structure mediate these responses [26], in addition to their well-known role in physiological stress reactions [27]. The central amygdala expresses both ERα and ERβ, but the latter seems to be more abundant [20,28,29]. It has been reported that local administration of a glucocorticoid agonist into the CeA is anxiogenic, and that this response is reduced after systemic treatment with an ERβ agonist [30]. Even though these data do not show that the agonist acted within the CeA when reducing anxiety, it is possible to suggest that the ERβ within the CeA modulates anxiety responses. Furthermore, the enhanced expression of CRH in that area observed following systemic treatment with kainic acid is reduced by estradiol [31]. These findings show that neurons in the CeA are responsive to estradiol, perhaps through activation of the ERβ. Possible functions of the ERα in this structure remain unknown.

In order to evaluate the question of context-dependent responses to site-specific alterations in the activity of estrogen receptors, we exposed groups consisting of both male and female rats living in a seminatural environment to different emotion-inducing stimuli. Either the ERα or the ERβ was silenced in the VMN or the CeA. The emotion-inducing stimuli employed have previously been shown to elicit different behavioral responses, presumably associated with different emotions [1]. These stimuli were lavender odor and chocolate-flavored food, known to produce a state of positive affect [32-34]. We also used white noise and fox odor in order to produce fear responses and an aversive emotional state [35,36]. Finally, a piece of music was played to the rats. The particular piece used here has been reported to produce estrogen-dependent anxiolysis on the elevated plus-maze and in the light/dark transition test [37], although we have found that it produces a slight fear reaction in the seminatural environment [1]. The potentially aversive properties of music were not known at the time the present experiment was run.
The proposed experiment would provide a picture of the potential importance of the estrogen receptors in the VMN and the CeA for emotional responses in safe as well as in threatening contexts, in a procedure with external validity. Perhaps this could have some bearing on the issue of human sex differences in the prevalence of anxiety, depression and some other neuropsychiatric disorders [38,39].

Subjects
A total of 64 female and 48 male Wistar rats (200 g and 250 g, respectively, upon arrival) were obtained from Charles River (Sulzfeld, Germany). The rats were housed in same-sex pairs in standard cages (Macrolon IV, 43 × 26 × 15 cm, l × w × h) prior to the beginning of the experiment, with water and food (RM1, Special Diets Services, Witham, UK) available ad libitum. The ambient sound level averaged 40 dB due to the ventilation system; the temperature was maintained at 21 ± 1°C and the humidity at 55 ± 10%. The rats were submitted to a 12L:12D reversed light cycle, lights being on between 11:00 p.m. and 11:00 a.m.

Surgery
Three weeks before the beginning of the experiment, the females underwent ovariectomy and stereotaxic surgery under anesthesia with a xylazine/ketamine cocktail (10 and 100 mg/kg, respectively). Immediately after ovariectomy, the females were fixed in a stereotaxic frame, and a small incision was made on top of the skull. Bilateral cannulae (30 gauge) were aimed at either the VMN (coordinates: anteroposterior: -2.56; mediolateral: ±0.55; dorsoventral: -9.50) or the CeA (anteroposterior: -2.30; mediolateral: ±4.00; dorsoventral: -7.00). Coordinates were based on the Paxinos and Watson atlas [40]. To silence the estrogen receptors, we used a short hairpin RNA (shRNA) encoded within an adeno-associated viral (AAV) vector. The females were bilaterally infused with 1 μl of an AAV vector directed either against the ERα (AAV-ERα) or the ERβ (AAV-ERβ). Control animals received an AAV vector that encoded a firefly luciferase (AAV-luc). This vector does not affect the estrogen receptors. All vectors contained an independent enhanced green fluorescent protein (EGFP) reporter. The shRNAs against ERα as well as luciferase employed here have been described in detail previously [5]. The shRNA against ERβ has been described more recently [41]. Both the AAV-ERα and the AAV-ERβ have been shown to silence the intended receptor without affecting expression of the other. The infusion lasted 10 min (infusion rate: 0.10 μl/min; Hamilton syringe and infusion pump), and the infusion cannulae were carefully withdrawn 10 min after the end of the infusion. Fentanyl, 0.05 mg/kg every 12 h, was provided for 72 h postoperatively.

Hormone treatment
Estrus was induced by sequential treatment with 17β-estradiol benzoate (EB, 18 μg/kg) followed by progesterone (P, 1 mg/rat) 48 h later (both obtained from Sigma Aldrich, St Louis, MO), dissolved in peanut oil (Den norske eterfabrikk, Norway). The hormones were injected subcutaneously in a volume of 1 ml/kg for EB and 0.2 ml/rat for P.

Apparatus
During the experiment, the rats were housed in a seminatural environment composed of an open area (210 × 120 × 80 cm, l × w × h) connected to a complex burrow system by 4 small openings (Fig. 1). The burrow system was kept in the dark, while the open area was submitted to the same light cycle as mentioned above. The light intensity was 180 lx during the day and 30 lx during the night. Dawn and dusk were simulated by 30-min light transitions.
The ambient sound level, humidity and temperature were the same in both parts of the seminatural environment. The entire floor of the seminatural environment was covered with wood chips (Tapvei, Harjumaa, Estonia). Bedding material (Happi mat, Datesend, Manchester, UK) was provided in the nest boxes in the burrow. Wood sticks (Tapvei) were added as enrichment to the open area, and 3 red polycarbonate huts (Datesend, Manchester, UK) were placed on the floor. Four bottles (1.5 l each) dispensed tap water in a corner of the open area, and about 2 kg of the habitual food was put on the floor. Two nozzles in the walls were connected to an odor distribution system (Olfactory Stimulus Package, Med Associates, Georgia, VT) and produced a constant air stream of 3 l/min. One nozzle was located in the long, back tunnel in the burrow and the other in the far wall of the open area. In addition, a sound system consisting of two A60 stereo speakers (Creative, Clas Ohlson, Norway), one in each part of the environment, was installed. Two cameras fixed about 2 m above the floor filmed the entire experiment. In the burrow, infrared lights (850 nm) allowed for video recording. Media Recorder (Noldus, Wageningen, The Netherlands) was used for creating and storing the video files. This experimental setup has been described earlier [1,[42][43][44].

Emotion-inducing stimulations
The rats were exposed to 5 experimental stimuli. Each of these stimuli has previously been shown to elicit a different behavioral pattern, probably caused by a different emotion [1]. The emotion-inducing stimuli chosen were either positive or negative to the rats, and stimulated different sensory modalities: olfactory, auditory or gustatory. The stimuli, in the order of presentation, were:
1 Lavender odor from 1.5 ml of Lavandula angustifolia essential oil (AromaBio, Lyon, France) replaced the room air stream through the nozzles (30 min). The odorant was put on a cotton pad in a glass jar. This stimulus has been reported to be anxiolytic in rats and humans [33,45].
2 Music, the piece mentioned in the Introduction, played through the sound system [37].
3 Chocolate-flavored food [32,34], known to produce positive affect [48].
4 White noise produced by a noise generator (Lafayette Instruments, Lafayette, IN) at 90 dB for 7.5 min. This stimulus is routinely used for inducing strong fear reactions in rats [35,49].
5 Fox odor from 35 μl of 2,5-dihydro-2,4,5-trimethylthiazoline (TMT; Contech, Delta, BC, Canada) for 30 min. The odor distribution system used to produce lavender odor was also used here. TMT odor produces fear reactions similar to those produced by exposure to a living predator [50,51].

Procedure
On day 0 at 9:00 a.m. the rats were weighed and shaved in different patterns on the back. In addition, black marks were made on the tail. Thereby it was possible to identify the individuals on video. At 1:00 p.m., the rats were released from their cages into the seminatural environment. On day 5 at 9:00 a.m. the females were captured and injected with EB. On day 7 at 9:00 a.m. the rats were captured again and injected with P. Four hours later, at 1:00 p.m., the sequence of emotional stimuli was initiated. There was a 50-min interval between the end of one stimulus and the start of the following. The order of presentation of the emotional stimuli was kept constant throughout the experiment. The reasons for this, as well as possible consequences, have been discussed in detail elsewhere [1]. Briefly, there are reasons for believing that the effects of the stimuli would have dissipated during the 50-min inter-stimulus interval.
The exception is fox odor, which may cause behavioral alterations for several hours following exposure [50]. Therefore, this was the last stimulus to be applied. It must also be pointed out that rats, in their natural habitat, are likely to be exposed to a sequence of events, both attractive and aversive, during the course of one single night. Thus, it can be maintained that the exposure to the several kinds of stimulation used here increases the external validity of the procedure.

Behavioral observations
Observation of the females' behavior was limited to the last 15 min of lavender and fox odor exposure, when the odors should have their full behavioral effects [52,53]. This was also the case for exposure to music. During the availability of chocolate, the first 15 min were observed. This made it possible to determine the immediate response to an attractive stimulus. Moreover, most of the chocolate was consumed during this period. The entire 7.5 min of noise exposure was observed. We also observed behavior during the 7.5 min period following the end of the white noise. Possible treatment effects on the recovery of pre-noise behavior could thereby be established. Behavior was scored according to a slightly modified version of the ethogram used in a previous study [1] (Table 1), using the Observer XT 12.5 (Noldus, Wageningen, The Netherlands).

Design
Fifteen groups of 7 rats each (4 females and 3 males) were successively run in the seminatural environment; thus, a total of 60 females were used. In each group in the seminatural environment, all females had different treatments. Apart from this, the 6 treatments were randomly distributed among the 15 groups of rats run in the seminatural environment.

Immunocytochemistry
The day after the experiment was terminated, the animals were euthanized with an overdose of pentobarbital. They were perfused with PBS followed by 4% paraformaldehyde. We removed the brain and stored it in paraformaldehyde at +4°C overnight. The following day, the brains were transferred to 10% sucrose in PBS, the subsequent day to 20% sucrose in PBS, and the third day to 30% sucrose in PBS, where they were kept for seven days. The brains were then frozen in isopentane cooled on dry ice and stored at −80°C until processing. The brains were frozen-sectioned into 40 μm slices with a sledge microtome (SM2000R, Leica Microsystems, Nussloch, Germany), and the VMN and CeA sections were collected and processed in accordance with a conventional free-floating protocol. Two sessions of immunocytochemistry were run, one for the brains that had received AAV-ERα, and one for the brains that had received AAV-ERβ. The appropriate sections were treated with antibodies against ERα (c1355, polyclonal, 1:25000, Merck Millipore, Germany) and EGFP (ab6673, GFP, 1:5000, Abcam, Cambridge, MA) in combination with secondary antibodies (BA1000, biotinylated anti-rabbit, Vector Laboratories, Burlingame, CA) and avidin-biotin peroxidase complex (PK-6101, ABC Elite kit from Vector Laboratories, Burlingame, CA) to identify cells containing ERα and the injection localization, respectively. After the antibody reactions and several washes in PBS, sections were stained with diaminobenzidine (DAB). DAB revealed the injection localization by brown coloration of EGFP, while the ERα was colored in dark purple by the addition of nickel. For the second session, the appropriate sections were treated with antibodies against ERβ (PA1-310B, polyclonal, 1:1000, ThermoFisher Scientific, San Jose, CA) and EGFP (ab290, 1:1000, Abcam, Cambridge, MA).
The PA1-310B does not bind to the ERα [54], and it has been used to quantify ERβ expression in many studies [17,55,56]. The same secondary antibodies and avidin-biotin peroxidase complex were used as before. After the antibody reactions and washing in PBS, sections were stained with DAB, revealing the injection localization in brown coloration and the ERβ in blue, by the addition of cobalt chloride. For counting purposes, microphotographs of the stained sections were taken using an Axiophot photomicroscope (Carl Zeiss, Oberkochen, Germany) connected to a digital camera (Nikon DS, Nikon, Tokyo, Japan). The pictures were then transferred to a computer and opened with Photoshop (Adobe Photoshop CS6). We selected three sections per individual and manually determined the density of ERs (number of ERs/mm²) by dividing the number of stained cells counted by the surface of each nucleus.

Table 1 (excerpt; f = frequency scored, d = duration scored):
Non-social and maintenance behaviors. Resting alone (f, d): rests immobile in a relaxed position at a distance greater than one rat length from a conspecific. Drinking (f, d): self-explanatory. Eating regular food (f, d): self-explanatory. Self-grooming and scratching (f, d): self-explanatory.
Fear- and anxiety-related behaviors. Hiding alone (f, d): immobile in a corner or nest box at a distance greater than one body length from another rat. Huddling (f, d): immobile in a corner or in a nest box in close contact with one or several other rats.

Data preparation and statistical analysis
We recorded the time spent in the burrow system and the open area, as well as the frequency of transitions between the zones of the seminatural environment (Fig. 1B). The frequency and, whenever possible, the total duration of each behavior displayed were determined. We then evaluated the stability of behavior during the observation period. We randomly picked some of the behaviors described in the ethogram (Table 1) and compared their frequency and duration in the first and last minute of observation with paired t-tests. For example, nose-off frequency and duration were stable across the observation period, both for females infused in the VMN and in the CeA (p > 0.446). Likewise, the frequency and duration of paracopulatory behaviors (p > 0.357), sniffing another rat (p > 0.537) and self-grooming (p > 0.083) were stable across the observation period. Therefore, the raw data for each behavior were converted into number per minute and duration per minute of observation. This made it possible to compare stimuli with observation periods of different length. The aim of some comparisons was to determine whether behavior during one stimulus indeed differed from behavior during the others. To that end, we compared the target stimulus to the mean of the four other stimuli. This was done by calculating a difference quotient in the following way: difference quotient = (target stimulus − mean of the four other stimuli) / (mean of the four other stimuli). If behavior during exposure to the target stimulus was identical to the mean of the other stimuli, the difference quotient would be zero. The larger the deviation from zero, the larger the effect of the stimulus compared to the other stimuli. Data are presented as the difference quotient. This procedure has been used earlier to determine the effect of emotion-inducing stimuli on behavior [42]. In order to avoid any potential confounding effect of the treatment, this analysis was based exclusively on frequency data from the rats treated with AAV-luc in the VMN and the CeA.
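For illustration, the difference-quotient computation and the one-sample test against zero described here can be sketched as follows. This is a minimal example in Python; the placeholder data, the gamma-distributed values and the use of scipy are our own assumptions, not part of the original analysis.

```python
import numpy as np
from scipy import stats

def difference_quotient(target, others_mean):
    """Per-subject (target - mean of other stimuli) / (mean of other stimuli)."""
    target = np.asarray(target, dtype=float)
    others_mean = np.asarray(others_mean, dtype=float)
    return (target - others_mean) / others_mean

# Hypothetical per-minute frequencies for one behavior in n = 20 AAV-luc females.
rng = np.random.default_rng(0)
target = rng.gamma(2.0, 1.5, size=20)   # frequency during the target stimulus
others = rng.gamma(2.0, 1.0, size=20)   # mean frequency over the 4 other stimuli

dq = difference_quotient(target, others)

# One-sample t-test against 0, Bonferroni-corrected for the 5 stimuli tested.
t_stat, p_val = stats.ttest_1samp(dq, popmean=0.0)
p_corrected = min(1.0, p_val * 5)
print(f"t(19) = {t_stat:.3f}, Bonferroni-corrected p = {p_corrected:.3f}")
```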
Since the differences between stimulus effects on behavior frequency and on duration were marginal, we present only behavior frequency, since this parameter was available for all behaviors. Only the time spent in the burrow, in the open area, and in the openings (i.e., risk assessment) is expressed as duration. In addition, the data collected during the last 50 s of white noise exposure, as well as during the 7.5 min directly following white noise offset, were divided into 10 intervals of 50 s each. This allowed for an analysis of the behavioral recovery following the end of white noise exposure. The number of ERα and ERβ in the females infused with vectors directed against these receptors, in the VMN and the CeA, was compared to the respective controls (AAV-luc VMN and AAV-luc CeA) with the t-test for independent samples. To assess whether behavior during a specific stimulus differed from the mean of the other stimuli, we submitted the difference quotient to a one-sample t-test comparing the obtained value to 0. The p-value was adjusted with the Bonferroni correction for the 5 comparisons made, corresponding to the 5 stimuli. When the use of the one-sample t-test was not possible because of non-normal data distribution according to the Shapiro-Wilk test, we used the Wilcoxon one-sample test with the Bonferroni correction. For the evaluation of the effects of gene silencing, the data concerning the VMN and the CeA were analyzed separately. For these analyses, both behavior frequency and duration were considered. When possible, we used two-way ANOVA with stimulus as within-groups factor and treatment as between-groups factor, followed by the Tukey HSD post hoc test. In case of a significant interaction between treatment and stimulus, simple main effects were analyzed. When the data deviated from the normal distribution according to the Shapiro-Wilk test, we analyzed the effect of the treatment with Kruskal-Wallis ANOVA, followed by the Conover post hoc test. Finally, the probabilities of displaying lordosis and of fleeing the white noise at its onset were analyzed with the binomial test, whose p-value was adjusted for the two comparisons made (AAV-ERα to AAV-luc, and AAV-ERβ to AAV-luc). To determine behavioral recovery after white noise, data from the last 50 s of white noise exposure until 7.5 min after its offset were analyzed using a mixed two-way ANOVA with time interval as within-groups factor and treatment as between-groups factor, followed by the Tukey HSD test. Simple main effects were analyzed after a significant interaction between treatment and time interval. When the data deviated from the normal distribution according to the Shapiro-Wilk test, we analyzed the effect of treatment with Kruskal-Wallis ANOVA and the effect of time intervals with Friedman's ANOVA. In case of significance, post hoc differences were analyzed with the Conover test. Only differences from the last 50 s of white noise exposure are reported. The significance threshold was p < 0.05. Statistical analyses were conducted with IBM SPSS Statistics, version 24, and R, version 3.4.3 (core and PMCMRplus packages).

Co-occurrence analysis
The seminatural environment allows the rats to express a substantial part of their behavioral repertoire. The resulting behavioral observation produced a list of behaviors in chronological order for each individual observed. Using a moving window of 4 behavioral items, we determined how often one behavior item occurred together with another in the same window, as sketched below.
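A minimal sketch of that windowed count follows; the behavior labels and the record below are hypothetical, and only the window size of 4 comes from the description above.

```python
from collections import Counter
from itertools import combinations

def co_occurrences(sequence, window=4):
    """Count how often two behavior items fall within the same moving window.

    sequence: chronologically ordered behavior labels for one individual.
    Returns a Counter keyed by unordered behavior pairs.
    """
    counts = Counter()
    for start in range(len(sequence) - window + 1):
        items = set(sequence[start:start + window])  # unique items in this window
        for pair in combinations(sorted(items), 2):
            counts[pair] += 1
    return counts

# Hypothetical observation record for one female.
record = ["sniff_floor", "rearing", "risk_assessment", "sniff_floor",
          "huddling", "risk_assessment", "nose_off", "rearing"]
print(co_occurrences(record).most_common(3))
```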
Each such joint occurrence was counted as a co-occurrence. Based on the relative frequency of co-occurrences of one behavior together with each of the other behaviors, clusters of significantly co-occurring items could be established as statistically independent profiles of items [57]. Descending hierarchical classification determined the probability, as evaluated by χ² analysis, for an item to be more present in one cluster than in any of the other clusters [58,59]. Co-occurrence clusters were visualized using the Fruchterman-Reingold algorithm, with the Iramuteq software (Interface de R pour les Analyses Multidimensionnelles de Textes et Questionnaires, available at http://www.iramuteq.org/). This procedure has been found to offer valuable information concerning the structure of behavior, and it has been extensively described elsewhere [1,42]. The analysis could be based either on the entire data set or on data obtained during a particular emotion-inducing stimulus and/or from animals receiving a particular treatment.

Histology
Females with a reduction in the number of the targeted receptor of more than 80% with respect to the appropriate control were included in the analyses. A low reduction could have resulted from a misplaced cannula or from low viral transduction in the target area. Forty-four females satisfied the criterion of an 80% minimum reduction. The location of the infusion sites is shown in Fig. 2. The slices from two females treated with AAV-ERα in the VMN were of poor quality, and immunocytochemistry was not performed. However, neither of these females responded with lordosis to the males' mounts. We have previously reported that this behavioral response is a biomarker of a substantial reduction in the number of ERα in the VMN [19,60,61]. Therefore, we included these females in the AAV-ERα VMN group. The females were distributed as follows: AAV-luc-VMN, n = 10; AAV-ERα-VMN, n = 7; AAV-ERβ-VMN, n = 6; AAV-luc-CeA, n = 10; AAV-ERα-CeA, n = 7; AAV-ERβ-CeA, n = 6. In the included females, we observed a reduction of 94% in the number of ERα in the CeA (t(9) = 8.98, p = 0.011) and a reduction of 95% of ERα in the VMN (t(13) = 14.13, p < 0.001) (Fig. 3A). For the ERβ, we achieved an 83% reduction in the CeA (t(9) = 16.12, p < 0.001) and an 84% reduction in the VMN (t(8) = 12.79, p = 0.001) (Fig. 3B).

Effect of the emotional stimuli (Table 2)
One-sample t-tests were used to determine whether the difference quotient obtained for each of the recorded behaviors during each emotion-inducing stimulus differed from 0. Only animals infused with AAV-luc were used, and the CeA and VMN groups were pooled. Exposure to lavender odor increased the transitions in the open area (t(19) = 3.666, p = 0.008) and decreased the time spent in the burrow (t(19) = 2.873, p = 0.049). In addition, females sniffed the males more frequently during this stimulus (t(19) = 3.355, p = 0.017), but nosed-off other females less frequently (t(19) = 3.220, p = 0.023). We found no other significant effect of the lavender odor compared to the mean of the other stimuli (all p's > 0.084) (Table 2). During exposure to chocolate, the time spent in the burrow was strongly decreased (t(19) = 55.587, p < 0.001). The frequency of resting with another rat was also significantly decreased (V(19) = 2.703, p = 0.035), but the other behaviors were not modified (all p's > 0.106) (Table 2).
During exposure to white noise, the number of transitions in the burrow, as well as the time spent in the burrow, were increased. Finally, the anti-social behaviors nose-off and fleeing from other females were increased (respectively: t(19) = 4.861, p = 0.001; t(19) = 3.435, p = 0.014). We observed no difference in these behaviors when they were directed toward males (p > 0.161). The remaining behaviors were not significantly impacted by white noise (all p's > 0.053) (Table 2). Finally, during exposure to fox odor, the number of transitions was decreased both in the open area and in the burrow.

Effect of treatment on sexual behavior
Sexual behaviors deviated from the normal distribution according to Shapiro-Wilk's test. Therefore, the effects of treatment on these behaviors were analyzed with the non-parametric Kruskal-Wallis ANOVA. We found no effect of AAV-ERα or AAV-ERβ infusion in the CeA on sexual behaviors (all p's > 0.060). In the VMN, females belonging to the AAV-ERα group had a lower probability of displaying lordosis than females from the control group, all emotion-inducing stimuli collapsed (binomial test, p = 0.024) (Fig. 4A). When looking at the specific emotion-inducing stimuli, treatment with AAV-ERα reduced the probability of displaying a lordosis during exposure to lavender (binomial test, p = 0.038) and chocolate (binomial test, p = 0.038), but not during exposure to music, white noise or fox odor (all p's > 0.45) (Fig. 4C). Despite the reduction in the probability of displaying lordosis, the lordosis frequency itself was not significantly reduced (χ²(N = 23) = 2.339, p = 0.310) (Fig. 4B). Similarly, the decreases in mounts received and in paracopulatory behaviors did not reach significance (all p's > 0.350). Likewise, the lordosis quotient and the rejection frequency were not affected by treatment (all p's > 0.819) (data not shown).

Effect of treatment on pro- and anti-social behavior
We did not find any effect of AAV-ERα or AAV-ERβ infusion in the CeA on pro- and antisocial behaviors, whether directed toward males or toward other females (all p's > 0.076) (data not shown). No effect on these behaviors was obtained when females were infused in the VMN (all p's > 0.130) (data not shown).

Effect of treatment on exploratory behavior
Treatment in the CeA influenced olfactory exploration of the seminatural environment according to the two-way ANOVAs for repeated measures (emotion-inducing stimulus × treatment), all observations collapsed. We found a main effect of treatment on the duration of sniffing the floor (F(2,20) = 3.787, p = 0.040). Females infused with AAV-ERβ spent more time sniffing the floor than the controls (p = 0.032) (Fig. 5A). We also found an interaction between treatment and stimulus for the duration of sniffing the floor (F(8,80) = 6.125, p = 0.025). Analysis of simple main effects within each stimulus showed an effect during lavender exposure (F(2,20) = 8.420, p = 0.002). During exposure to lavender odor, females infused with AAV-ERβ spent more time sniffing the floor than the controls (p = 0.010) (Fig. 5B). No other significant interaction between treatment and stimulus was found (all p's > 0.200). Behaviors specific to chocolate exposure (sniffing, eating, grabbing) and olfactory exploration (sniffing the nozzles during lavender or fox odor) showed no effect of treatment (all p's > 0.221). In the VMN, treatment modified the rearing frequency (F(2,20) = 1.598, p = 0.030), but post hoc tests did not reach significance (all p's > 0.052) (data not shown).
Treatment in the VMN did not modify chocolate-specific behaviors or sniffing the nozzles (all p's > 0.610).

Effect of treatment on fear- and anxiety-related behavior
ANOVA of the data from females treated with the viral vectors in the CeA showed that the duration of risk assessment was modified by the treatment (F(2,20) = 4.150, p = 0.031). Females treated with the AAV-ERβ spent more time displaying risk assessment than the controls (p = 0.027) (Fig. 6A). However, anxiety-related behaviors specific to white noise (freezing, hiding alone, huddling, startle, flight) showed no influence of treatment (all p's > 0.304). For females infused in the VMN, a behavior specific to white noise exposure, huddling, showed a treatment effect (F(2,20) = 7.914, p = 0.003). Females treated with AAV-ERα had a reduced frequency of huddling compared to controls (p = 0.002) (Fig. 6B). The viral vectors did not modify other anxiety-related or white noise-specific behaviors (all p's > 0.065).

Effect of treatment on non-social, maintenance behaviors
Treatment in the CeA modified the frequency of eating (H(2, N = 21) = 5.999, p = 0.050). Females treated with the AAV-ERβ ate food less often than the controls (p = 0.016) (Fig. 6C). No effect of treatment was found on drinking, resting or self-grooming (all p's > 0.076). We did not find any effect of infusion in the VMN on these behaviors (all p's > 0.279).

Co-occurrence analysis of behavior: CeA groups (Fig. 7)
AAV-ERα, AAV-ERβ and AAV-luc appeared in distinct clusters at each emotion-inducing stimulus except white noise. AAV-luc was mostly associated with the non-social behaviors drinking, eating food and self-grooming, and with the exploratory behavior rearing, during exposure to all the emotion-inducing stimuli. The cluster containing AAV-ERα included the sexual behaviors during exposure to lavender and fox odor (Fig. 7A-E). AAV-ERβ was associated with risk assessment during exposure to lavender and music, and with sniffing the nozzles during these two stimuli as well as during exposure to chocolate and white noise. During exposure to chocolate, AAV-ERβ was associated with most chocolate-specific behaviors (Fig. 7C). Only during exposure to white noise did AAV-ERα and AAV-ERβ appear in the same cluster, together with the fear-related behaviors hiding alone and fleeing the noise (Fig. 7D).

VMN groups (Fig. 8)
AAV-luc was consistently associated with the non-social behavior resting alone, and was associated with sexual behaviors at each stimulus except fox odor. During noise, AAV-luc was found in the same cluster as the noise-specific behavior huddling (Fig. 8D). AAV-ERα was associated with rejection at all stimuli except chocolate (Fig. 8C). During fox odor, AAV-ERα and AAV-luc appeared in the same cluster, associated with exploratory behaviors (Fig. 8E). AAV-ERβ appeared in the same cluster as AAV-ERα during exposure to lavender odor and music (Fig. 8A-B). AAV-ERβ was associated with risk assessment during lavender odor, music and white noise. During exposure to white noise and fox odor, AAV-ERβ was found in the same cluster as the anti-social behaviors nose-off and fleeing from another rat (Fig. 8D-E).

Effects of ER knockdown on recovery from white noise
White noise caused numerous alterations in the females' behavior, as described above. Even though the viral vectors only affected huddling during this stimulus, it is possible that the recovery from the treatment-independent effects could nonetheless be affected by the treatment.
Central amygdala
When the data satisfied normal distribution criteria according to Shapiro-Wilk's test, and when the error variances were homogeneous according to Hartley's Fmax test, two-way ANOVAs with repeated measures on one factor (time interval) and independent measures on the other (treatment) were performed. We did not find any main effect of treatment on behavioral changes after exposure to white noise (all p's > 0.077). Furthermore, the ANOVAs did not find any interaction between treatment and time intervals (all p's > 0.101). For behavior not satisfying the criteria for parametric analysis, Friedman's ANOVA found an effect of time intervals on the frequency of nose-off to other females (χ²(df = 9) = 19.049, p = 0.025), as well as on the frequency and duration of paracopulatory behaviors.

Ventromedial nucleus of the hypothalamus
Behavioral data of females infused in the VMN were analyzed with the same methods as those of females infused in the CeA. For females infused in the VMN, we found an effect of treatment on huddling during the period of recovery from white noise (frequency: H(2, N = 23) = 8.750, p = 0.013; duration: H(2, N = 23) = 8.591, p = 0.014). Females treated with AAV-ERα had a reduced frequency (p = 0.006) and duration (p = 0.006) of huddling compared to controls (Fig. 9A). Analysis of the treatment effect at each time interval showed that both the AAV-ERα and AAV-ERβ groups had a lower huddling frequency than controls during white noise exposure (AAV-ERα-AAV-luc, p < 0.001; AAV-ERβ-AAV-luc, p = 0.006) (Fig. 9B). Only the AAV-ERα group differed from controls in the duration of huddling. Females treated with AAV-ERα spent less time huddling than the controls during the last interval of white noise exposure (p = 0.001) and the first interval after white noise offset (p = 0.011) (Fig. 9B). In addition, time intervals influenced the huddling frequency (χ²(df = 9) = 45.265, p < 0.001) and duration (χ²(df = 9) = 37.823, p < 0.001). In both cases, all intervals but the first after white noise offset showed less huddling than during the white noise (all p's < 0.03) (Fig. 9B-C). Sniffing the floor increased after white noise offset (frequency: F(2,9) = 3.243, p = 0.002; duration: F(2,9) = 2.675, p = 0.008). Notably, this behavior was more frequent and lasted longer between 50 and 150 s following white noise offset (all p's < 0.05) (Fig. 9D). Finally, the time spent in the open area increased after the offset (F(2,9) = 2.798, p = 0.006). This increase became significant from 350 to 450 s after the offset (all p's < 0.05) (Fig. 9E).

Co-occurrence analysis (Fig. 10)
Following exposure to white noise, AAV-ERα, AAV-ERβ and AAV-luc in the CeA appeared in distinct clusters. AAV-luc was associated with sexual, prosocial and non-social behaviors. AAV-ERα was found in the same cluster as exploratory behaviors, while AAV-ERβ was associated with anti-social behaviors and risk assessment (Fig. 10A). AAV-luc in the VMN was associated with sexual behaviors and risk assessment. AAV-ERα appeared linked to rejection and resting alone. AAV-ERβ formed a distinct cluster with various behaviors: sniffing the floor, eating food, huddling and resting with another rat (Fig. 10B).

Different emotional challenges elicit different behavioral patterns
The behavioral modifications induced by the different stimuli indicate that different emotional states were elicited in the female rats. Lavender increased exploration of the open area and stimulated olfactory investigation of males.
Music reduced locomotory activity in the burrow and generally decreased olfactory exploration, as well as risk assessment. Chocolate was mainly characterized by chocolate-related behaviors and decreased presence in the burrow. White noise exposure was strongly aversive to the rats: it increased behavioral indicators of fear, and it also heightened the rats' arousal (e.g., locomotory activity). Fox odor increased the presence in the burrow and reduced social interactions. The effect of music is difficult to interpret. Considering the decrease in exploratory, sexual, anti-social and prosocial behavior, the most cautious conclusion is that music lowered the rats' arousal. Overall, the present results confirmed our predictions concerning the effects of positive and aversive stimuli on the rats' behavior. These stimuli were able to elicit different levels of arousal and to modify classical indices of fear and anxiety; thus, they are relevant for the investigation of the role of the ERs in safe vs. threatening contexts. In addition, we observed the first 7.5 min following the end of white noise. We expected that disrupting estrogen actions in the VMN or the CeA would influence the structure of behavioral recovery from white noise, and notably of the behaviors specific to this stimulus. The post-white noise interval analyzed here showed that the white noise-specific behavior huddling, as well as open area exploration, returned to or approached baseline levels. This observation suggests that recovery from even a strongly aversive stimulus is rather quick. Therefore, the 50-min interval applied between the stimuli should be sufficient to avoid overlapping effects. Interestingly, the co-occurrence analyses of the post-white noise period confirmed the association between white noise and risk assessment in the AAV-ERβ group.

Estrogen receptors in the CeA regulate arousal and anxiety levels
Knockdown of the ERα in the CeA did not produce any observable effect. This was not unexpected, considering the few ERα receptors present in that area [20,28]. To the contrary, reduced expression of the ERβ in this structure enhanced risk assessment duration. In addition, in the co-occurrence analysis, AAV-ERβ was associated with risk assessment display during exposure to lavender odor and music, and also in the minutes following white noise offset. In many of the standard tests for fear and anxiety, a similar behavior pattern is considered an exquisite indicator of the subject's level of anxiety [62][63][64]. Thus, the females with few ERβ receptors in the CeA showed enhanced anxiety. The reduced eating frequency is also compatible with elevated anxiety levels. These observations clearly suggest that stimulation of this receptor at this site has anxiolytic actions. In the co-occurrence analysis, only during exposure to the strongly aversive white noise did AAV-ERα and AAV-ERβ appear in the same cluster. A possible explanation is that this fear-inducing stimulus masked the anxiolytic effects of the ERβ, which were more apparent during less aversive stimuli. We suggest that at least some of the anxiolytic actions of systemically administered ERβ agonists are localized to the CeA. In addition, AAV-ERβ increased sniffing the floor during all emotion-inducing stimuli. In the co-occurrence analysis, at each stimulus AAV-ERβ was associated with environmental exploration (sniffing the floor and the nozzles). It was also associated with chocolate investigation. All these behaviors are characteristic of increased arousal as operationally defined by Pfaff et al. [65].
A different question is whether estrogens, acting on the ERβ in the CeA, participate in the physiological regulation of fear and anxiety responses. There is little evidence for enhanced blood concentrations of estrogens in stress- or fear-inducing contexts. In fact, foot shock or chronic mild stress have been reported either to leave blood estrogen concentrations unchanged [66] or to produce a small decrease [67,68]. Thus, if estrogens modulate the acute effects of stressors, it is necessary to assume local synthesis of the steroid. Neurons in the amygdala express aromatase [69][70][71], making it possible to propose that estrogens may indeed be locally synthesized. There is actually some evidence showing that stressful events (foot shock) enhance the concentration of estradiol in the amygdala of female rats, without any concomitant change in plasma testosterone or estradiol [72]. These observations suggest enhanced local estrogen synthesis in the amygdala in response to stress. Furthermore, since the availability of the substrate for aromatase, testosterone, does not increase [72], de novo steroid synthesis must be required. Unfortunately, none of the studies mentioned above distinguished between the different amygdaloid nuclei, but it may be assumed that the CeA also expresses aromatase, and that the stress-induced increase in aromatase expression and estrogen concentration also occurs within this structure. If these speculations are correct, then activation of the ERβ in the CeA would attenuate the response to fear-inducing stimuli, and reduced expression of this receptor would enhance these responses, exactly as occurred in the present study. It must also be mentioned that many rapid actions of the ERβ have been described [73,74], making it possible for local synthesis to have almost immediate behavioral effects. In this context it may be interesting to note that rats in proestrus and estrus show reduced anxiety on the elevated plus-maze [75] and in the Vogel conflict procedure [76], as well as in the light-dark, social interaction and defensive burying tests [77]. A similar variation during the estrus cycle has been reported in mice [78]. However, ERβ knockout mice do not show this variation [79]. It appears, then, that the ERβ mediates the estrus cycle-associated variations in the response to threatening situations, at least in mice. The specific role of the ERβ in the central amygdala has not been evaluated, but it is known that local injections of estradiol into the amygdala have anxiolytic effects [80]. Site-specific knockdown of the ERβ in cycling females could provide the data necessary for determining the role of the CeA in the variations in anxiety responses during the estrus cycle. Finally, silencing either the ERα or the ERβ in the CeA had no influence on behavioral recovery after white noise exposure. Nevertheless, in the co-occurrence analysis, AAV-ERα, AAV-ERβ and AAV-luc appeared in distinct clusters. AAV-ERα was associated with exploratory behaviors, while AAV-ERβ appeared together with antisocial behaviors and risk assessment. Interestingly, the AAV-luc group was associated with sexual behaviors and resting. It is difficult to give a meaning to this observation. Perhaps the ERs are differentially involved in the responses to an aversive stimulus and in the recovery from these responses after the end of the stimulus.
Estrogen receptors in the VMN regulate sexual behaviors and possibly fear-related behaviors
The reduction in the number of ERα in the VMN was accompanied by a diminution in sexual behaviors. The females were less likely to display lordosis. This is consistent with previous findings [7,8]. In addition, AAV-ERα was regularly associated with rejection in the co-occurrence analyses. The fact that lordosis was not entirely suppressed despite the strong reduction observed in the number of ERα (94%) could be due to a slightly too dorsal injection of the AAV in the VMN. Indeed, lordosis is mediated specifically by ERs in the ventrolateral area of the VMN [81]. In the present study, we followed the usual procedure of counting the number of receptors in the entire VMN [7]. However, it appeared that about half of our rats infused with the shRNA directed against the ERα in the VMN had the infusion cannula tips located in the dorsal part of the nucleus. This could account for the fact that some females in the AAV-ERα group displayed lordosis. Silencing of the ERα in the VMN seems to have anxiolytic properties. First, the behavior huddling during white noise exposure was suppressed by AAV-ERα. Rats seek social interaction in aversive situations to lower manifestations of fear, a phenomenon called social buffering [82]. Anxiolytic treatment has been found to decrease the need for social buffering [82,83]. Therefore, the decrease in huddling, but not in hiding alone, during the aversive white noise could be interpreted as an anxiolytic effect. After the offset of the white noise, females treated with AAV-ERα huddled for a shorter time than the controls, while females treated with AAV-ERβ did not differ. This suggests that silencing of the ERα is responsible for this anxiolytic action. Second, the frequent association of AAV-ERα with rearing, a novelty-induced behavior [84], could also be interpreted as decreased anxiety [85,86]. If silencing the ERα in the VMN leads to reduced manifestation of anxiety-related behaviors in an aversive context, it can be concluded that this receptor is anxiogenic in such contexts. This is exactly what was proposed some years ago [87].

Conclusion
The main findings of this experiment were that the ERβ in the CeA is anxiolytic in several emotion-inducing contexts. To the contrary, in the VMN the ERα appears to be anxiogenic in aversive contexts, while silencing the ERβ had no effect. We have previously argued that a seminatural environment has an external validity far superior to that of standard test procedures [43,88]. Consequently, we dare to propose that the effects observed here are manifestations of the importance of the ERs in rats' natural responses to emotion-inducing stimuli. Finally, the present data point to the CeA as a structure with an essential role in estrogens' emotion-modulating actions. Whether or not these observations are relevant for understanding the sexual dimorphisms in the prevalence of some psychiatric disorders in humans is impossible to determine at present.

Source of funding
Faculty of Health Sciences, University of Tromsø.

Conflicts of interest
None declared.
Dynamin-2 Stabilizes the HIV-1 Fusion Pore with a Low Oligomeric State

Summary
One of the key research areas surrounding HIV-1 concerns the regulation of the fusion event that occurs between the virus particle and the host cell during entry. Although it is universally accepted that the large GTPase dynamin-2 is important during HIV-1 entry, its exact role during the first steps of HIV-1 infection is not well characterized. Here, we have utilized a multidisciplinary approach to study the role of DNM2 during fusion of HIV-1 in primary resting CD4 T cells and TZM-bl cells. We have combined advanced light microscopy and functional cell-based assays to experimentally assess the role of dynamin-2 during these processes. Overall, our data suggest that dynamin-2, as a tetramer, might help to establish hemi-fusion and stabilize the pore during HIV-1 fusion.

INTRODUCTION
One of the key research areas surrounding HIV-1 concerns the regulation of the fusion event that occurs between the virus particle and the host cell during entry. HIV-1 fusion is initiated when conformational alterations to the viral gp120-gp41 envelope proteins occur following binding of the virus to its receptor (CD4) and co-receptor (either CCR5 or CXCR4) (Doms and Trono, 2000), resulting in the release of the viral core into the cytoplasm. Several reports have presented evidence indicating that HIV-1 fuses directly at the cell membrane in SupT1-R5, CEM-ss and primary CD4 T cells (Herold et al., 2014). Plasma membrane fusion (Wu and Yoder, 2009) presents a completely different set of challenges for incoming virus particles compared to those entering by post-endocytic fusion (de la Vega et al., 2011; Miyauchi et al., 2009a). For example, fusion events occurring at the plasma membrane mean that incoming particles inevitably encounter an intact cortical actin cytoskeleton, which constitutes a physical barrier that must be overcome for successful infection to occur. As an alternative to plasma membrane fusion, clathrin-mediated endocytosis (CME) allows viruses to cross the cell plasma membrane harbored within endocytic vesicles, followed by a fusion event between the membranes of the virus and the endosome. This process requires precise signaling events, not only to initiate the process but to ensure that fusion occurs prior to degradation of the virus particle within the increasingly toxic environment of the endolysosomal machinery (Stein et al., 1987). Irrespective of the entry route utilized, it is clear that both actin rearrangement and dynamin-2 (DNM2) activity are required for successful viral infection to occur (Barrero-Villar et al., 2009; Gordón-Alonso et al., 2013). Interestingly, while several reports clearly show the relevance of DNM2 in HIV-1 fusion (Miyauchi et al., 2009a; Pritschet et al., 2012; Sloan et al., 2013), its exact role during virus entry is yet to be clarified. One of the primary roles of DNM2 is to pinch off forming endocytic vesicles from the plasma membrane to yield an endosome during CME (Ferguson and De Camilli, 2012). Thus, the involvement of DNM2 in HIV-1 fusion is incompletely understood, since recent evidence indicates that in primary CD4 T cells the virus fuses directly at the plasma membrane and not from within endosomes (Herold et al., 2014), meaning the importance of DNM2 in HIV-1 fusion may be distinct from its role in CME.
Here, we have combined advanced light microscopy with cell-based functional assays to recover HIV-1 fusion kinetics in reporter cell lines (TZM-bl) and in primary resting CD4 T cells isolated from healthy individuals (challenged with CXCR4-tropic HXB2 virions). Interestingly, the addition of dynasore (a DNM2 inhibitor) at partially inhibitory concentrations (Chou et al., 2014) delayed HIV-1 fusion kinetics in primary CD4 T cells. In addition, we performed fluorescence lifetime imaging microscopy (FLIM) and number and brightness experiments combined with total internal reflection fluorescence microscopy (TIRFM) to ascertain the oligomeric state of DNM2 during HIV-1 fusion. We found that DNM2 adopted a low oligomeric state (a tetramer) when reporter cells (TZM-bl) were exposed to virions with HIV-1 JR-FL envelope proteins. By contrast, cells exposed to HIV-1 virions displaying VSV-G envelope proteins (Env) exhibited higher oligomeric DNM2 states (hexamers and octamers). These data supported insights gained from cell-cell fusion experiments, in which fusion between target cells expressing CD4 and co-receptor (CCR5) and effector cells expressing the HIV-1 envelope was delayed by 3-4 min upon exposure to high concentrations of dynasore. Moreover, we observed flickering of the fusion pore in HIV-1-driven cell-cell fusion experiments when non-inhibitory concentrations of dynasore were used. Collectively, our results suggest that DNM2 might play a critical role in inducing hemi-fusion and in stabilizing the HIV-1 fusion pore, probably adopting a low oligomeric state during fusion pore expansion and dilation within the plasma membrane.

Dynasore Inhibits HIV-1 Fusion in Both Reporter TZM-bl Cells and CD4 T Cells
We tested different concentrations of dynasore, assessing HIV HXB2 fusion in resting CD4 T cells with the real-time beta-lactamase (BlaM) assay (Jones and Padilla-Parra, 2016), which measures viral fusion. Briefly, upon fusion the Vpr-BlaM chimera packaged within the virion is liberated into the cytoplasm of the target cell, where it cleaves a Förster resonance energy transfer (FRET) substrate (CCF2), altering its fluorescence profile (Figure 1A). The range of concentrations used in our titration experiments (5, 20, and 80 μM) (Figure 1B) did not affect cell viability, as we detected no propidium iodide (PI)-positive cells (Figure S1) under these conditions. Previous reports have shown that the HIV envelope (in this case HXB2), but not the VSV-G protein, is capable of mediating HIV infection of resting CD4 T cells (Agosto et al., 2009). Here, we also show that the VSV-G Env was unable to mediate endosomal fusion (Figure 1C) in resting CD4 T cells. Seeking a validated model for DNM2-dependent virion endocytosis and fusion, we employed TZM-bl reporter cells, previously reported to allow endosomal fusion (Jones and Padilla-Parra, 2016; Miyauchi et al., 2009a, 2009b). HIV VSV-G was able to fuse in TZM-bl cells and is a well-characterized virion that fuses within endosomes in a pH-dependent manner (Johannsdottir et al., 2009). Using an endpoint BlaM assay (Zlokarnik et al., 1998), we monitored and compared the impact of different concentrations of dynasore on fusion for both HIV VSV-G and HIV JRFL in TZM-bl cells. Higher concentrations of dynasore were required to fully inhibit fusion for HIV JRFL (250 μM) as compared to HIV VSV-G (180 μM) (Figure 1D), suggesting that the role of DNM2 in HIV JRFL fusion may be unrelated to endocytosis also in TZM-bl reporter cells.

Figure 1 legend. (A) Cartoon depicting the BlaM assay. Upon virion fusion and capsid release, the Vpr-BlaM chimera recognizes a FRET reporter (CCF2) that changes color (green to blue) upon cleavage. (B) Real-time BlaM was applied using HIV-1 virions packaging the Vpr-β-lactamase chimera and pseudotyped with HXB2 Env on primary CD4 T cells at different concentrations of dynasore: 0 μM (green dots), 5 μM (black dots), 20 μM (gray dots), and 80 μM (white dots). The proportion of fusion-positive cells versus the total number of cells is shown (y axis) versus time, in min (x axis). (C) HIV-1/Vpr-β-lactamase virions pseudotyped with VSV-G turned out not to be fusogenic (red dots), showing the same behavior as HIV-1/Vpr-β-lactamase bald particles (without Env, black dots). (D) HIV-1 virions packaging the Vpr-β-lactamase chimera and pseudotyped with either VSV-G (cyan dots) or JR-FL (orange dots) were exposed to TZM-bl cells at different concentrations of dynasore (0, 100, 180, 260, 340, and 400 μM), and endpoint BlaM (as defined in Experimental Procedures) was applied. Higher concentrations of dynasore were required to fully inhibit HIV JRFL (240 μM) as compared with HIV VSV-G (180 μM).

Of note, we performed several experiments to validate the use of high concentrations of dynasore on live cells (Figure S1), as it has been shown that dynamin inhibitors might have off-target effects related to membrane ruffling (Park et al., 2013). We therefore quantitatively studied the impact of dynasore on ruffling and the actin cytoskeleton through the use of FRET Raichu biosensors (Figure S1). We also avoided spinoculation, as we found that this technique might disrupt the regulation of the actin cytoskeleton, as evidenced by changes in small GTPase activity (Figure S1), with a likely knock-on effect on endocytosis (Ferguson and De Camilli, 2012). We found that higher dynasore concentrations (250 μM) were needed to arrest full HIV-1 fusion as compared to other reports (Miyauchi et al., 2009a; de la Vega et al., 2011) (80 μM and 160 μM, respectively). As stressed by de la Vega et al. (2011), it is possible that the dynasore preparation might affect the rate of escape of HIV-1, although dynasore treatment reproducibly blocked HIV-1 endocytosis and fusion in their experiments and ours. This is the reason why we titrated dynasore (and all drugs employed in our study) while performing cell-viability experiments. To better understand the role of DNM2 in HIV-1 entry, we performed time-of-addition BlaM (Jones and Padilla-Parra, 2016) using either HIV VSV-G or HIV JRFL in reporter TZM-bl cells. We compared the effect of dynasore with that of specific inhibitors known to act on surface-accessible viruses (TAK 779 and T20) and with universal inhibitors that block fusion for both HIV VSV-G and HIV JRFL (NH4Cl and temperature block, respectively) (Miyauchi et al., 2009a). When treating TZM-bl cells with HIV VSV-G using fully inhibitory concentrations of dynasore (i.e., 400 μM), temperature block, and 80 mM NH4Cl, a lysosomotropic agent that raises the endosomal pH and therefore inhibits fusion, we found similar fusion kinetics, with t1/2 ~30 min (Figure 1E). This result suggests that, as expected, HIV VSV-G enters the cell via dynamin-dependent endocytosis, and the universal inhibitors (NH4Cl and temperature block) behave similarly to the specific inhibitor dynasore. Therefore, HIV VSV-G fusion can be completely blocked by inhibiting endocytic pathways.
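The half-times quoted here and below can be extracted by fitting a sigmoid to time-of-addition escape curves. A minimal sketch of such a fit follows; the logistic form and the synthetic data points are our own assumptions, not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, t_half, k):
    """Fraction of virions that have escaped the inhibitor by drug-addition time t."""
    return 1.0 / (1.0 + np.exp(-k * (t - t_half)))

# Synthetic time-of-addition data: drug added at time t (min), signal normalized to 1.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 60.0, 90.0])
escaped = np.array([0.02, 0.10, 0.35, 0.55, 0.75, 0.92, 0.99])

(t_half, k), _ = curve_fit(logistic, t, escaped, p0=(30.0, 0.1))
print(f"t1/2 = {t_half:.1f} min, rate constant = {k:.3f} per min")
```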
Different fusion inhibitors (specific and universal) were also utilized when assessing the role of DNM2 in HIV JRFL entry kinetics in TZM-bl cells (Figure 1F). We titrated TAK 779, a small-molecule CCR5 antagonist (Figure S1), and enfuvirtide (T20), a known fusion inhibitor that blocks the formation of the 6-helix bundle (Figure S1), in order to use fully inhibitory concentrations in our time-of-addition BlaM. When plotting together the HIV JRFL fusion kinetics for dynasore, TAK 779 and T20, as well as for experiments where fusion was inhibited by temperature block (reduction from 37°C to 4°C; Figure 1E), we saw that similar fusion kinetics were obtained for dynasore and TAK 779 (specific inhibitors), with similar t1/2 = 30 min. The fusion kinetics recovered for T20 and temperature block (universal inhibitors) were also very similar to each other, but both were delayed ~20 min relative to dynasore and TAK 779, with t1/2 = 50 min. We reasoned that dynasore and TAK 779 must act just prior to fusion, while T20 and the temperature block (a universal inhibitor of both endocytosis and fusion) act right at the moment of fusion pore formation. This result suggests a different role for DNM2 in HIV JRFL as opposed to HIV VSV-G fusion, as DNM2 seems to act right before full fusion, almost synchronously with the HIV Env/CD4-CCR5 interaction. Of note, we also tested NH4Cl inhibition on HIV JRFL, but as expected it was not able to arrest fusion (Figure S1).

DNM2 Interactions Are Different for HIV VSV-G and HIV JRFL
Recently, a report showed the importance of using FLIM to follow DNM2 activity in live cells in relation to its role in regulating actin dynamics (Gu et al., 2014). We therefore applied FLIM to follow DNM2 interactions in live cells in the context of virus entry and fusion (Figures 2A and 2B). TZM-bl cells co-transfected with DNM2 labeled with either eGFP (Dyn-GFP) or mCherry (Dyn-mCherry) were exposed to HIV JRFL or HIV VSV-G at a high MOI (10). As a negative control, viruses at the same MOI were also added to TZM-bl cells co-transfected with Dyn-GFP and mCherry alone. A shortening of the average lifetime due to FRET was observed for TZM-bl cells co-transfected with Dyn-GFP and Dyn-mCherry upon treatment with HIV JRFL or HIV VSV-G (p = 0.02 and p < 0.001, respectively), indicating that DNM2 molecules were interacting (Gu et al., 2014). Importantly, HIV VSV-G exposure resulted in a drastic lifetime diminution (average ⟨τ⟩ = 1.78 ± 0.07 ns, n = 10, as compared to the control ⟨τ⟩ = 2.16 ± 0.09 ns, n = 18), whereas HIV JRFL exposure produced only a slight, but significant, lifetime diminution (average ⟨τ⟩ = 2.02 ± 0.07 ns, n = 14) when compared to the negative controls. These data suggest that the VSV-G envelope protein, and to a lesser extent that of JRFL, provoked DNM2 to interact, albeit to different extents (Figure 2B). It is therefore possible that DNM2 plays distinct roles in the entry mechanisms of HIV VSV-G and HIV JRFL. Of note, the distribution of endocytic markers (early and mature endosomes, Rab5-mCherry) did not change upon addition of HIV virions (Figure S2).

HIV-1 Entry and Fusion Require a DNM2 Low Oligomeric State
In a previous report (Ross et al., 2011), the oligomeric state of DNM2 close to the plasma membrane was investigated by combining TIRFM with number and brightness analysis (Unruh and Gratton, 2008).
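In number and brightness analysis, the apparent brightness and number at each pixel follow from the mean and variance of the intensity fluctuations across the image stack (B = σ²/⟨I⟩, N = ⟨I⟩²/σ²; Unruh and Gratton, 2008). The following is a minimal per-pixel sketch on a synthetic stack; detector offset and S-factor calibration are deliberately omitted here and would be needed for real EM-CCD data.

```python
import numpy as np

def number_and_brightness(stack):
    """Per-pixel N&B from a time series of images shaped (frames, y, x).

    brightness B = variance / mean; number N = mean**2 / variance.
    Oligomers appear as B > 1, roughly proportional to oligomer size
    after calibration against a monomer reference.
    """
    mean = stack.mean(axis=0)
    var = stack.var(axis=0)
    B = np.divide(var, mean, out=np.zeros_like(mean), where=mean > 0)
    N = np.divide(mean**2, var, out=np.zeros_like(mean), where=var > 0)
    return N, B

# Synthetic 100-frame stack, 64 x 64 pixels, with Poisson-like shot noise.
rng = np.random.default_rng(1)
stack = rng.poisson(lam=50.0, size=(100, 64, 64)).astype(float)
N, B = number_and_brightness(stack)
print(f"mean brightness = {B.mean():.2f}")  # ~1: pure shot noise, no mobile oligomers
```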
When combining TIRF with number and brightness and utilizing very fast image acquisition (i.e., 50 ms/frame), it is possible to quantify the oligomeric state of dynamin, provided the dwell time (image acquisition) is shorter than the timescale of the diffusion being investigated. Number and brightness analysis provides quantitative information regarding the oligomeric state on a pixel-by-pixel basis. Since we had seen changes in lifetime that relate to protein-protein interactions of DNM2 in the previous FRET-FLIM experiment, we further investigated this finding by expressing Dyn-mCherry in TZM-bl cells before exposing them to either HIV JRFL or HIV VSV-G and performing TIRF/number and brightness microscopy (Figure 3). We also show that virions are able to get underneath the cells, using labeled virions (HIV JRFL Gag-GFP) and TZM-bl cells expressing Dyn-mCherry (Figure S3). The addition of HIV VSV-G at an MOI of ten induced the formation of higher oligomeric states (octamers; red pixels in the N and B figures, Figure 3), suggesting that the scission of clathrin-coated pits (CCPs) during CME may be conducted by dynamin octamers (n = 14). Conversely, the addition of HIV JRFL at the same MOI had no noticeable effect on the oligomeric state of Dyn-mCherry (tetramers, identified in Figure 3, n = 14). Thus, HIV entry appears not to require higher-order dynamin structures in TZM-bl cells.

Dynamin-2 Stabilizes the Fusion Pore during HIV Fusion
Fusion between individual HEK293T effector cells expressing the JRFL envelope and cytosolic eGFP and target TZM-bl reporter cells expressing mCherry was studied using real-time fluorescence microscopy (Figure 4). Effector HEK293T cells were allowed to sediment onto target cells at 4°C for 30 min (as described in Experimental Procedures), sufficient time to allow receptor priming (Padilla-Parra et al., 2013). Subsequently, the sample was mounted on an inverted microscope and the temperature shifted to 37°C in order to allow cell-cell fusion to occur (Figure 4A). The formation of fusion pores and the kinetics of fusion were assessed by the transfer of eGFP from the effector cells toward the target cells, which mirrored the mCherry transfer from target cells toward effector cells (Figures 4B-4D). Both effector and target cells became yellow when equilibrium in fusion pore dynamics was established (Figures 4B-4D). Changes in the mean fluorescence intensity of the target (red signal) and the effector cells (green signal) were plotted (Figure 4E, left). When cells were treated with high concentrations of dynasore (400 μM), flickering of the pore was observed (Figure 4E, middle), indicating that the fusion pore was not stable under these conditions (Padilla-Parra et al., 2012). There is a slight possibility that the pore closure, measured as stabilization of the GFP and concomitant mCherry transfer between effector and target cells (pink zone in Figure 4E, middle), comes from several pores simultaneously, but these would have to be totally synchronized, as opening and closure at different times would never be able to arrest fusion for over 2 min (horizontal lines for the time-dependent intensities of GFP and mCherry in the pink zone during flickering). In all cases, delayed pore formation and pore closing were observed for TZM-bl cells treated with dynasore, suggesting that DNM2 plays an important role in establishing and stabilizing the HIV-1 fusion pore.
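The pore-opening times underlying such traces can be estimated as the first time point at which the target-cell eGFP signal rises above its baseline. A minimal sketch follows; the threshold rule, frame interval and synthetic trace are our own assumptions rather than the authors' analysis pipeline.

```python
import numpy as np

def fusion_onset(trace, dt=5.0, n_baseline=10, n_sigma=3.0):
    """Return the pore-opening time (s) from a target-cell eGFP intensity trace."""
    baseline = trace[:n_baseline]
    threshold = baseline.mean() + n_sigma * baseline.std()
    above = np.flatnonzero(trace > threshold)
    return above[0] * dt if above.size else np.nan

# Synthetic trace: flat baseline, then eGFP influx after pore opening at ~150 s.
rng = np.random.default_rng(2)
t = np.arange(0.0, 600.0, 5.0)
trace = 100.0 + rng.normal(0.0, 2.0, t.size)
mask = t >= 150.0
trace[mask] += 20.0 * (1.0 - np.exp(-(t[mask] - 150.0) / 60.0))  # dye-filling kinetics
print(fusion_onset(trace))  # ~170 s for this synthetic trace
```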
When plotting the cumulative distribution of individual fusion events from three independent experiments, a delay of ~3 min was observed for the TZM-bl cells treated with dynasore relative to the untreated ones (Figure 4E, right). The average t1/2 for cell-cell (JR-FL) fusion events without dynasore treatment was 2.83 ± 1.69 min (n = 17); for dynasore-treated cells, the average fusion event, measured from the initial point (pore opening) to the final point (equilibrium), was delayed to 4.9 ± 1.8 min (n = 20). T20 and TAK 779 (Ayouba et al., 2008), known HIV-1 inhibitors that block fusion and receptor engagement, respectively, arrested cell-cell fusion when used at concentrations inhibitory for single-virus fusion (Figure S4), providing a robust negative control for the cell-cell fusion approach. In contrast, when cell-cell fusion was studied using effector HEK293T cells expressing the VSV-G envelope and TZM-bl cells expressing mCherry, no change in pore formation or in the kinetics of individual events was observed when dynasore was present (Figure 4F). Cell-cell fusion constitutes a unique approach to study fusion in the absence of endocytosis, and it showed that DNM2 is needed to establish and maintain the fusion pore (Figure S4) for HIV-1 but not for VSV, suggesting that there must be a regulation step in HIV-1 DNM2-dependent fusion, with DNM2 perhaps recruited toward the fusion pore through HIV-1/CD4 and co-receptor interactions.

Dynamin-2 Co-localizes with Double Labeled HIV-1 Virions prior to Fusion

In order to test whether DNM2 is recruited at the inner plasma membrane toward primed HIV JRFL virions prior to fusion, we imaged TZM-bl cells expressing DNM2-mCherry in the presence of double labeled HIV JRFL virions (DiD/Gag-GFP). The virions were allowed to prime CD4 receptors in TZM-bl cells for 30 min at 4 °C. Again, spinoculation protocols were not applied, to avoid unwanted side effects on DNM2 regulation. The cells were imaged under the microscope, and micrographs were acquired in both the X-Y and X-Z directions on a confocal microscope, as explained in Experimental Procedures. Co-localization analysis in both planes revealed that 75% of the double labeled particles analyzed co-localized with DNM2 before fusion (Figure 5), as both the envelope (DiD labeled) and the core (Gag-GFP) co-localized with DNM2-mCherry. We examined the spatial overlap between the intensity profiles for DNM2-mCherry and DiD/Gag-GFP, which was above 80% in all cases positive for co-localization in both the X-Y and X-Z directions (n = 24 from three independent experiments; Figures 5C and 5D). These results suggest that DNM2 recruitment happens prior to fusion. We also tested the dominant-negative mutant DNM2-K44A in the context of HIV-1 fusion and found that it was not able to fully block HIV JRFL fusion (Figure S5). These data coincide with Herold et al. (2014) and support the idea of DNM2 acting in a low oligomeric state, as DNM2 GTPase activity relates to high oligomeric states (Ferguson and De Camilli, 2012).

Figure 3 legend (partial): (A) … and Dynamin-mCherry treated with HIV JRFL (fourth row) were imaged using TIRF (as described in Experimental Procedures). The average intensity images (first column from the left, gray micrographs) are shown together with the brightness images (second column from the left, rainbow pseudocolor), and the graph plotting brightness (counts per second per molecule) versus intensity (arbitrary units) for all pixels is also shown (third column from the left). The high oligomeric states are seen in cells treated with HIV VSV-G (red pixels with high brightness, warm colors), and the lower oligomeric states, comparable to Dynamin-mCherry without treatment, were seen in cells exposed to HIV JRFL.
In both cases, the cells were treated at MOI = 10. The size of the micrographs is 25.6 × 25.6 µm. (B) The average maximum oligomeric state detected per cell is plotted for three different conditions: TZM-bl cells expressing Dynamin-mCherry (first column), TZM-bl cells expressing Dynamin-mCherry treated with HIV JRFL (second column), and TZM-bl cells expressing Dynamin-mCherry treated with HIV VSV-G. The higher oligomeric states were detected taking as a reference the brightness recovered from mCherry alone (monomers) expressed in TZM-bl cells and calibrating the S factor of the EM-CCD camera as explained in Experimental Procedures. Only cells treated with HIV VSV-G systematically showed higher oligomeric states right after addition of the virions, indicating high CME endocytic activity.

Figure 4 legend (partial): (E) The fluorescence intensities were recovered as a function of time, integrating both signals (red and green) coming from two single events from target cells in the absence of dynasore (left) and in the presence of 400 µM dynasore (middle). Flickering of the fusion pore is only observed in cells treated with dynasore. The cumulative distribution of individual cell-cell fusion events comparing untreated cells (green dots, n = 17) against dynasore-treated cells (small red dots, n = 20) is shown in the right image, evidencing a delay of around 3 min for cells treated with the DNM2 inhibitor dynasore. (F) HEK293T cells expressing freely diffusing GFP and VSV-G Env (effector cells) were added onto TZM-bl reporter cells expressing freely diffusing mCherry (target cells) at room temperature for 30 min. Shifting the pH using a citrate buffer at pH ~5 permitted us to visualize VSV-G Env-mediated cell-cell fusion, measured by time-resolved two-color confocal fluorescence microscopy. The left image shows a representative example without dynasore treatment, and the middle image shows an example of cell-cell fusion treated with 400 µM dynasore. Flickering of the pore was never observed in this case. The cumulative distribution of individual cell-cell fusion events comparing untreated cells (green dots, n = 16) against dynasore-treated cells (small red dots, n = 15) is shown in the right image, evidencing synchronous fusion kinetics.

DISCUSSION

The mode of entry for HIV-1 was thoroughly investigated in a recent report (Herold et al., 2014), where the authors determined that HIV-1 must fuse at the plasma membrane and does not require endocytosis to complete fusion. This view, however, is opposed to that of Miyauchi et al. (2009a), who postulated that HIV-1 has to undergo exclusively endosomal fusion, based on data from real-time single virus tracking combined with BlaM assays. We suspect that this controversy in the field, debating whether or not HIV-1 gets inside the cell through endocytosis (Marin and Melikyan, 2015), has diverted attention from the actual role of DNM2 during HIV-1 fusion. Nevertheless, there is growing interest in the field in understanding the role of actin dynamics in HIV infection: a recent report (Ménager and Littman, 2016) points to the importance of DNM2 in dendritic cell-mediated trans-enhancement of CD4 T cell infection by HIV in vitro. In this scenario, insights about the true role of DNM2 during single virus fusion are needed to fully understand the mechanisms taking place (Padilla-Parra and Dustin, 2016).
Indeed, the process of HIV-1 fusion pore formation and enlargement is an energy-intensive mechanism that necessitates the orchestrated role of several proteins (Munro et al., 2014), among them DNM2. Membrane fusion is vital for eukaryotic life; in this context, it has recently been shown that the transition to full membrane fusion can be determined by competition between fusion and DNM2-dependent fission mechanisms, supporting the hemi-fusion and hemi-fission hypothesis in live cells (Zhao et al., 2016). Our data suggest that DNM2 might play a multifaceted role during HIV-1 entry: first, a low DNM2 oligomeric state (n = 4) might help to induce HIV-1 hemi-fusion (Montessuit et al., 2010) and in turn prevent fission from happening, as DNM2-mediated fission depends on the formation of an octamer with a ring-like structure and GTPase activity (Mattila et al., 2015); this sequence of events would favor HIV-1 full fusion. Second, DNM2 tetramers could concomitantly stabilize the fusion pore right after HIV-1 hemi-fusion (Figure 6). Here, we show various lines of evidence to support this hypothesis. First, we have shown substantial changes to HIV-1 fusion kinetics when primary CD4 T cells are treated with a low, non-inhibitory dosage of dynasore (5 µM and 20 µM) (Figure 1). We have also shown that dynasore acts right before fusion, synchronously with TAK 779, a CCR5 antagonist (Figure 1), in reporter TZM-bl cells. Second, our quantitative imaging experiments based on FRET-FLIM (Figure 2) and number and brightness (Figure 3) clearly show a difference in DNM2 activity and oligomeric state when treating the cells with a high concentration of either HIV VSV-G (high oligomers, octamers) or HIV JRFL (low oligomers, tetramers). Third, cell-cell fusion assays revealed that dynasore could disrupt the formation of the fusion pore between effector cells expressing the HIV-1 Env (JRFL) and target TZM-bl cells, causing flickering of the fusion pore and delayed fusion kinetics (Figure 4). However, we could not fully inhibit fusion with high concentrations of dynasore. Importantly, the dominant-negative mutant DNM2-K44A was not able to fully block HIV JRFL fusion (Figure S5). This mutant blocks DNM2 GTPase activity, which in turn is related to its oligomeric state (Ferguson and De Camilli, 2012), reinforcing the idea that DNM2 acts in a low oligomeric state during HIV-1 entry and fusion, and also that its role during this process is not related to endocytosis. Moreover, we also show that DNM2 recruitment toward the fusion pore has to be specific (Figure 5) and regulated, which suggests that it might be responsible for inducing HIV-1 hemi-fusion as a tetramer. This behavior has previously been reported for Dynamin-related protein 1 (Drp1), which promotes tethering and hemi-fusion of membranes in vivo (Montessuit et al., 2010). This DNM2 tetrameric state would in turn be very important, since on one hand it is unable to complete fission (Ferguson and De Camilli, 2012) and on the other it induces full fusion and pore stabilization (Figure 5). We hypothesize that DNM2 might be regulated by engagement of CD4 and co-receptor interactions, either through a feedback loop with actin, as suggested in Taylor et al. (2012), and/or through a BAR domain protein able to sense curvature (González-Jamett et al., 2013).
Overall, our data suggest that DNM2, as a tetramer, might help to establish hemi-fusion, might inhibit fission, and does stabilize the pore during HIV-1 fusion.

EXPERIMENTAL PROCEDURES

Plasmids

pR8ΔEnv (encoding the HIV-1 genome harboring a deletion within Env), pcRev, Gag-GFP, H1N1, and VSV-G were kindly provided by Greg Melikyan (Emory University). The plasmid encoding the JR-FL envelope protein was a kind gift from James Binley (Torrey Pines Institute for Molecular Studies). Dynamin-EGFP and Dynamin-mCherry were obtained from Addgene.

Cell Culture

HEK293T cells and TZM-bl cells were grown using DMEM (Life Technologies) supplemented with 10% fetal bovine serum, 1% penicillin-streptomycin, and 1% L-glutamine to give complete DMEM (DMEMcomp). All cells were maintained in a 37 °C incubator supplied with 5% CO2.

Cell Purification

Leukoreduction chambers from healthy individuals were obtained from the National Blood Service. CD4+ T cells were purified from the peripheral blood of healthy human donors. Blood was incubated (20 min, 25 °C) with RosetteSep human CD4+ T cell enrichment cocktail (StemCell Technologies). The remaining unsedimented cells were loaded onto Ficoll-Paque Plus (Sigma-Aldrich), isolated by density centrifugation, and washed with PBS. The purified cells were cultured in RPMI containing antibiotics and 10% heat-inactivated FBS. De-identified leukoreduction chambers were obtained from the Oxford Radcliffe Biobank, which operates under UK Human Tissue Authority license number 12217.

Virus Production

Pseudotyped viral particles were produced by transfecting HEK293T cells plated at ~60%-70% confluency in T75 or T175 flasks. DNA components were transfected using GeneJuice (Novagen) in accordance with the manufacturer's instructions. To produce particles harboring the BlaM reporter, cells were transfected with 2 µg pR8ΔEnv, 2 µg Vpr-BlaM, 1 µg pcREV, and 3 µg of the appropriate viral envelope (either VSV-G, the CCR5-tropic HIV-1 strain JR-FL, or the CXCR4-tropic HXB2). For viruses harboring Gag-GFP, 3 µg of the Gag-GFP plasmid were used. Transfection mixtures were then added to cells in DMEMcomp before returning flasks to the 37 °C CO2 incubator. At 12 hr post-transfection, the transfection mixture-containing medium was removed and cells were washed with PBS. Fresh DMEMcomp (lacking phenol red) was then added. Cells were subsequently incubated for a further 24 hr. At 48 hr post-transfection, viral supernatants were removed from cells and pushed through a 0.45 µm syringe filter (Sartorius Stedim Biotech) before being aliquoted and stored at −80 °C. For SVT-compatible virus production, cells were transfected in the same manner with 2 µg pR8ΔEnv, 3 µg Gag-GFP, 1 µg pcREV, and 3 µg of the appropriate viral envelope (either VSV-G or JR-FL). At 12 hr post-transfection, the transfection complexes were removed and cells were washed with PBS before being incubated at 37 °C with 10 mL Opti-MEM (Life Technologies) containing 10 µM DiD (Life Technologies) for 4 hr. Subsequently, the staining mixture was removed, cells were washed twice with PBS, and fresh DMEMcomp (lacking phenol red) was added. Cells were incubated for a further 24 hr prior to harvesting.

BlaM Assay

At 24 hr prior to the assay, TZM-bl cells were plated at 4 × 10^4 cells/well in black clear-bottomed 96-well plates. On the day of the assay, cells were cooled on ice prior to the addition of the appropriate MOI of virus (all infections were performed in 100 µL volumes).
Immediately following addition of virus harboring Vpr-BlaM, cells were placed at 4 °C for 1 hr. Virus was then removed, cells were washed with PBS, and 100 µL of DMEMcomp was added to each well before shifting the plate to the 37 °C CO2 incubator to initiate viral entry. To gain kinetic data, virus fusion was blocked at the appropriate time point (0, 15, 30, 45, 60, 75, and 90 min) by removing the media and replacing it with media containing dynasore, TAK 779, NH4Cl, or T20 (Sigma-Aldrich). The inhibitor concentrations were determined by testing different concentrations in titration experiments (Supplemental Information). Note that for the 0 min time point, drugs were added immediately prior to the 37 °C temperature shift. After 90 min, cells were loaded with CCF2-AM from the LiveBLAzer FRET B/G Loading Kit (Life Technologies) and incubated at room temperature in the dark for 2 hr. Finally, the CCF2 was removed; cells were washed with PBS and fixed with 2% PFA prior to viewing.

BlaM Assay Spectral Analysis and Real-Time BlaM

TZM-bl cells loaded with CCF2 were excited using a 405 nm continuous laser (Leica), and the emission spectra between 430-560 nm were recorded pixel by pixel (512 × 512) using a Leica SP8 X-SMD microscope with a lambda resolution of 12 nm. The ratio of blue emission (440-480 nm, cleaved CCF2) to green emission (500-540 nm, uncleaved CCF2) was then calculated pixel by pixel using ImageJ (https://imagej.nih.gov/ij/) for three different observation fields using a 20× objective and plotted as a function of time. Fusion kinetics were then recovered with automated software (R) detecting blue/green ratios from individual cells above the threshold given by our negative control (No-Env virions packaging Vpr-BlaM). Finally, a new protocol able to retrieve real-time HIV-1 fusion data was applied. Briefly, the real-time BlaM assay represents a more streamlined approach for measuring virus fusion kinetics. Here, target cells are first loaded with CCF2-AM in the presence of 12.5 mM probenecid and later exposed to virus particles. This means that, upon temperature shift to 37 °C, cleavage of CCF2-AM and the resultant color change from green to blue can be visualized in real time, all in a single sample of cells/virus and without the need for fusion inhibitor addition. This typically permits the recording of more data sets and produces a more refined kinetic curve as compared to time-of-addition BlaM. Of note, this protocol was also applied to TZM-bl cells, but without success. We found that the CCF2-AM substrate was pumped out more efficiently in these cells, even in the presence of probenecid, and we therefore decided to apply a time-of-addition approach with TZM-bl cells.

Förster Energy Transfer by Fluorescence Lifetime Imaging Microscopy

Living cells expressing Dynamin-EGFP alone or co-expressing Dynamin-EGFP and Dynamin-mCherry were imaged before and after virion addition using an SP8 X-SMD microscope from Leica Microsystems. Areas of interest were chosen under either a 20× air objective or a 63×/1.4 NA oil immersion objective. Cells were excited using a 488 nm pulsed laser tuned at 80 MHz, coupled with single photon counting electronics (PicoHarp 300), and fluorescence was subsequently detected by hybrid external detectors. To rule out artifacts due to photo-bleaching and insufficient signal to noise, only cells with at least 250-1,000 photons per pixel and a negligible amount of bleaching were included in the analysis, after a 2 × 2 image binning (Leray et al., 2013; Padilla-Parra et al., 2009).
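The per-pixel decays acquired this way are subsequently fitted to exponential models (described in the next paragraph); the summary quantities then reduce to weighted combinations of the fitted amplitudes and lifetimes. Below is a minimal sketch assuming a two-exponential model, I(t) = a1·exp(−t/τ1) + a2·exp(−t/τ2); the amplitude-weighted average shown is one common definition of the mean lifetime, and the amplitude fraction of the short component is a simple approximation of the fraction of interacting donor, not necessarily the exact estimators implemented in SymPhoTime or MAPI.

```python
# Amplitude-weighted mean lifetime and a simple fraction-of-interacting-donor
# estimate from a two-exponential FLIM fit. The fit parameters below are
# placeholders; tau_fret (quenched donor) and tau_free (unquenched donor)
# would come from the fits described in the text.
def mean_lifetime(a1, tau1, a2, tau2):
    return (a1 * tau1 + a2 * tau2) / (a1 + a2)

def fraction_interacting(a_fret, a_free):
    # Amplitude fraction of the short (FRET-quenched) component.
    return a_fret / (a_fret + a_free)

tau_fret, tau_free = 1.1, 2.4   # ns, hypothetical lifetimes
a_fret, a_free = 0.45, 0.55     # fitted amplitudes, hypothetical
print(f"<tau> = {mean_lifetime(a_fret, tau_fret, a_free, tau_free):.2f} ns, "
      f"fD = {fraction_interacting(a_fret, a_free):.2f}")
```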
The acquired fluorescence decay of each pixel in one whole cell was deconvolved with the instrument response function (IRF) and fitted by a Marquardt nonlinear least-squares algorithm with one- or two-exponential theoretical models using SymPhoTime software from PicoQuant GmbH. The mean fluorescence lifetime (τ) and fraction of interacting donor (fD) were calculated as previously described (Leray et al., 2013; Zhao et al., 2014) using SymPhoTime, MAPI software (Leray et al., 2013) and ImageJ (https://imagej.nih.gov/ij/). Statistical analysis of the lifetime data was performed using a two-tailed t test or rank-sum test (SigmaPlot). A mask to filter out the punctate structures based on threshold analysis was applied using ImageJ, showing that the overall average lifetimes did not change. TCSPC acquisitions lasted ~3 min to accumulate enough photons to perform double-exponential fits. Importantly, transient interactions or high-intensity structures will be exaggerated after accumulating photons over such acquisition times.

Total Internal Reflection Microscopy Combined with Number and Brightness Analysis

TZM-bl cells were transfected with Dynamin-mCherry and observed in a Zeiss Elyra TIRF microscope equipped with a 100× oil objective (1.46 NA). Cells were exposed to a 561 nm line (100 mW), and total internal reflection was achieved by reaching the critical angle (previously calibrated with lipid bilayers treated with red lipophilic dyes). One hundred images were recovered at 256 × 256 pixels, setting the EM-CCD (Andor) exposure time at 50 ms per frame. Images were analyzed to recover number and brightness using SimFCS software (Laboratory for Fluorescence Dynamics, University of California at Irvine). To account for cell movement and moving objects, a running average of ten frames was used to detrend the fluorescence fluctuations and correct for cell movement during the acquisition. A sample of cells expressing mCherry alone was used to calibrate the settings of the system and to recover a brightness above 1 for molecular diffusion, above immobile structures and detector noise.

Cell-Cell Fusion Assays

HEK293T cells expressing freely diffusing GFP and JRFL Env (effector cells) were added onto TZM-bl reporter cells expressing freely diffusing mCherry (target cells) at 4 °C for 30 min. Shifting the temperature under the microscope to 37 °C permitted us to visualize JRFL Env-mediated cell-cell fusion, measured by time-resolved two-color confocal fluorescence microscopy using a Leica SP8 microscope. A white light laser (WLL) was set at 488 and 588 nm to simultaneously excite GFP and mCherry using a 40× oil immersion objective, and the emission light of both fluorescent proteins was recovered with photon counting detectors (HyD) tuned at 500-550 nm (green channel) and 600-650 nm (red channel). The pinhole was set at 1.5 Airy units, and we used an automatic adaptive autofocus to prevent z-drifting while imaging (Leica). Leukoreduction chambers were used as a source of human peripheral blood mononuclear cells. The fluorescence intensities were recovered as a function of time by integrating both signals (red and green) from regions of interest comprising target cells (TZM-bl), in the absence of dynasore and in the presence of 400 µM dynasore, using the free software ImageJ (https://imagej.nih.gov/ij/). If cells moved during the movies, single-cell tracks were recovered using manual tracking (ImageJ). The cumulative distribution of individual cell-cell fusion events was calculated using SigmaPlot.
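The empirical cumulative distribution used to compare treated and untreated fusion events is straightforward to compute directly; the sketch below uses placeholder event times, not the measured data reported above.

```python
# Empirical CDF of single cell-cell fusion times; event times are placeholders.
import numpy as np

def ecdf(times):
    x = np.sort(np.asarray(times, dtype=float))
    return x, np.arange(1, x.size + 1) / x.size

untreated = [1.9, 2.4, 2.8, 3.1, 3.6]   # min, hypothetical
dynasore = [4.2, 4.8, 5.2, 5.7, 6.4]    # min, hypothetical
for label, data in (("untreated", untreated), ("dynasore", dynasore)):
    x, y = ecdf(data)
    print(f"{label}: median fusion time = {np.median(x):.1f} min")
```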
The concentration of T20 (Sigma) used to inhibit cell-cell fusion was 40 µg/mL.

Time-resolved single virus tracking with TIRFM was performed on TZM-bl cells expressing DNM2-mCherry (Addgene) that were grown to near confluency on glass-bottom 35 mm Petri dishes (MatTek) in phenol red-free growth medium. Cells were placed at 4 °C, and HIV JRFL viruses (packaging Gag-GFP) at 1.5 × 10^4 IU were added and allowed to sediment for ~30 min. After that, cells were placed under the TIRF microscope and imaged using a 100× objective, with a 488 nm laser for GFP and a 561 nm laser for mCherry.

3D Confocal Imaging

TZM-bl cells expressing either DNM2-mCherry (Addgene) or Rab5-mCherry were grown to near confluency on glass-bottom 35 mm Petri dishes (MatTek) in phenol red-free growth medium. Cells were placed at 4 °C, and viruses at 1.5 × 10^4 IU were added and allowed to sediment for ~30 min. After that, cells were placed under the SP8 X-SMD Leica confocal microscope (Leica Microsystems) and imaged. The WLL was set for two different pathways to avoid bleed-through between Gag-GFP, Rab5-mCherry, and DNM2-mCherry: (1) WLL tuned at 488 and 633 nm to simultaneously excite GFP and DiD, and (2) WLL tuned at 589 nm to excite DNM2-mCherry. We used a 63× oil immersion (1.3 NA) objective, and the emission light of both fluorescent proteins and DiD was recovered with photon counting detectors (HyD) tuned at 500-550 nm (green channel), 600-650 nm (mCherry channel), and 640-700 nm (DiD channel). The pinhole was set at 1 Airy unit, and we used an automatic adaptive autofocus to prevent z- and y-drifting while imaging (Leica). Images were taken in the X-Y and X-Z planes. The fluorescence intensity profiles were recovered by integrating both pathways, (1) the DiD (far-red) and Gag-GFP (green) signals and (2) DNM2-mCherry (shown in blue), along lines crossing the equatorial part of double labeled virions, using the free software ImageJ (https://imagej.nih.gov/ij/). Co-localization was considered positive when the overlap of the DNM2-mCherry intensity profile with both the DiD and Gag-GFP channels was at least 80%.
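One plausible way to score such a profile overlap is sketched below: the shared area under the normalized line profiles. Both the min-area metric and the example line scans are illustrative assumptions on our part, since the exact overlap metric is not specified in the text; the 80% cutoff mirrors the criterion above.

```python
# Sketch of an intensity-profile overlap score for co-localization; the
# min-area metric and the example line scans are illustrative assumptions.
import numpy as np

def profile_overlap(p1, p2):
    a = np.asarray(p1, dtype=float)
    b = np.asarray(p2, dtype=float)
    a /= a.sum()
    b /= b.sum()
    return np.minimum(a, b).sum()  # 1.0 = identical normalized profiles

dnm2 = [1, 3, 8, 10, 7, 2, 1]  # hypothetical DNM2-mCherry line scan
did = [0, 2, 7, 11, 8, 3, 1]   # hypothetical DiD line scan
score = profile_overlap(dnm2, did)
print(f"overlap = {score:.2f}; co-localized: {score >= 0.80}")
```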
Severe Dizziness and Hypereosinophilia: Coincidence or Complication? A Case Report

Hypereosinophilia is a common issue in medicine. One rare cause is myeloproliferative neoplasm with PDGFRA rearrangement. In these patients, the gold standard for therapy is low-dose imatinib. We present the case of a patient with a new diagnosis of myeloproliferative neoplasm following an unconventional diagnostic pattern, who developed clinically relevant, unexplained dizziness a week after starting treatment. Our case presented with lower back pain and multiple bone lesions on MRI investigation. Bone marrow and cytogenetic analysis led to the diagnosis of myeloproliferative neoplasm with PDGFRA rearrangement. We started treatment with a tyrosine kinase inhibitor (imatinib), and the patient noticed the onset of severe, persistent and intense dizziness, which was more intense with closed eyes. Diagnostic tests were not conclusive, and the dizziness persisted at 48 months of follow-up. In conclusion, clinically relevant dizziness has never been described in patients with myeloproliferative neoplasm. Even if the exact physiopathological mechanism is not clear, clinicians should know that hypereosinophilia can lead to central nervous system damage.

Background

One rare aetiology of hypereosinophilia is related to clonal production of eosinophils due to the FIP1L1-PDGFRA gene fusion, resulting from a deletion at 4q12. This entity is classified as myeloproliferative neoplasm with PDGFRA rearrangement, and the common presentation is similar to chronic eosinophilic leukaemia [1]. One important aspect to consider when approaching and treating this syndrome is that eosinophils can secrete many cytokines and other inflammatory factors that can stimulate an important inflammatory response with tissue remodelling, provoking potentially irreversible organ damage. The organs most often involved are the lungs, heart, gastrointestinal tract, skin and central nervous system [2]. These patients respond very well to low doses of imatinib, a tyrosine kinase inhibitor originally produced for chronic myeloid leukaemia. Low doses of imatinib (100 mg/day) in monotherapy may be adequate to obtain a clinical and molecular response in the majority of patients, while even lower doses can be used as maintenance therapy once a total response is achieved (100 mg/week) [3].

Case Presentation

We report the case of a 60-year-old male patient, in good health status, who had been suffering from lower back pain for several years. An MRI investigation showed multiple bone lesions in the vertebral column, and the patient was therefore referred to an oncologist for further investigation. Blood tests found leucocytosis of 22,000/μL with a prevalence of eosinophils (53%, 11,500/μL), normal haemoglobin and thrombocytes, and high levels of vitamin B12 (4,059 pmol/L, range 138-652). We performed a PET scan, which showed uptake at different vertebral levels (cervical to sacral), unusual for metastatic bone disease (Fig. 1A). We then performed a bone scintigraphy, which was negative for bone involvement (Fig. 1B); hence, we focussed on a disease mainly localized in the bone marrow. We performed a bone marrow biopsy, which showed increased myelopoiesis with some immature forms and a massive infiltration of eosinophils, most of them already mature (Fig. 2). Immunohistochemical analysis was negative.
Because of the strong suspicion of chronic eosinophilic leukaemia, we performed FISH and cytogenetic analyses and found a FIP1L1-PDGFRA fusion gene, diagnostic for myeloid neoplasms with eosinophilia and rearrangement of PDGFRA. At our first examination, the patient was asymptomatic except for the aforementioned back pain, and the clinical status was normal. Neither splenomegaly nor hepatomegaly was present. CT and PET imaging excluded pathological lymphadenopathies. According to the literature [4], we started a therapy regimen of imatinib 100 mg/day together with short-term corticosteroid treatment, as we were unable to exclude cardiac involvement in a patient with a diagnosis of ischaemic and hypertensive cardiopathy. The patient did not immediately experience adverse events. Seven days after starting the treatment, the patient noticed the onset of severe and intense dizziness, which was more intense with closed eyes. In fact, with closed eyes, the patient lost his balance and was not able to maintain an upright position. The Romberg test was positive, with a high risk of falling with closed eyes. The dizziness was described as subjective and rotatory. The symptom was so intense and persistent that it limited the patient's quality of life, forcing him to stop his work in the transport field. The patient visited an otolaryngologist, and all the tests performed were negative (Rinne test, Weber test, no nystagmus, normal motor coordination). A visit to the Department of Neurology, including a cerebral MRI, clearly excluded central causes of dizziness; a neurosurgical exam was also normal, with no clinical signs of myelopathy, and a cervical CT scan showed a degenerative picture consistent with the patient's age. Orthostatic hypotension was excluded with a negative Schellong test and a clinically negative exam (no variceal veins). Remotely, one might also consider adrenal insufficiency, but our patient did not have new related symptoms such as impotence, nightly diarrhoea, pupil anomalies or sweating disorders. We speculated that the dizziness might be related to imatinib intake, even though this has never been reported before. The other possible pathogenesis that should be taken into consideration is central nervous system damage due to hypereosinophilia. From an oncological point of view, the patient experienced a good response to the treatment, with a clinical response after 2 weeks and a complete molecular response after 13 months. Due to the lack of scientific evidence on the length of treatment and the rate of relapse, and in agreement with the patient, we did not stop the treatment, despite the persistent dizziness. However, after a few months, the dizziness spontaneously became less intense, although the symptoms remained at a lower grade. After 19 months, considering the excellent outcome, and in order to test the true relationship between the dizziness and the drug, we decided to decrease the treatment dose from 100 mg/day to 300 mg/week, and after 27 months we reduced it to 100 mg/week. At present, the patient is at 48 months of follow-up and has had neither a clinical nor a molecular relapse of the disease. The dizziness was reported unchanged even at the lower dose of imatinib.

Discussion and Conclusions

We therefore cannot conclude that there is a real correlation between the drug intake and the reported persistent and severe dizziness, but the suspicion remains moderate and may be explained by a non-dose-related mechanism.
We included in the differential diagnosis orthostatic hypotension, which might point to autonomic neuropathy caused by either the disease or the drug, and adrenal insufficiency, but both were excluded given the absence of related symptoms and the absence of pathological results in clinical and instrumental tests. Another possible differential diagnosis is punctiform, permanent damage of the central nervous system; however, we suppose that such damage could be too small to be detected by the radiological exams performed. To our knowledge, no case of hypereosinophilia with such symptoms has been described so far. Díaz et al. [5] described a case of reversible cerebellar damage with unstable gait and ataxia 14 days after the diagnosis of idiopathic hypereosinophilic syndrome. They detected hyperintense, vascular-type lesions on T2-weighted MRI images, which resolved after specific treatment. It is also well known that eosinophils can secrete two neurotoxic proteins in particular, eosinophil cationic protein and eosinophil-derived neurotoxin. Both proteins can cause central nervous system damage, as demonstrated in vitro by Navarro et al. [6], with a dose-related mechanism in both astrocytes and cerebellar granule cells. They observed that the metabolic activity of these cells was reduced as the proteins' concentration rose, while apoptosis increased. Although our case differs from the one described by Díaz et al. [5] and no MRI lesions were detectable, the physiopathological mechanism could be similar, with mild tissue damage at the level of the posterior fossa due to eosinophil migration to the central nervous system. This could lead to direct damage related to cytokine and degranulation product secretion, which usually intensifies in the first days of therapy with imatinib. We find our case interesting because of its particular presentation and unconventional diagnostic pattern, as well as the development of clinically relevant dizziness, never described before in association with this pathological entity. Even if the exact physiopathological mechanism is not clear in our case, clinicians should know that hypereosinophilia can lead to central nervous system damage, and thus patients may develop related symptoms such as dizziness.
A screening-level human health risk assessment for microplastics and organic contaminants in near-shore marine environments in American Samoa

Solid waste disposal is a growing concern among Pacific Island nations. With severe limitations in land area, in combination with the lack of reuse or recycling options, many near-shore marine ecosystems across Oceania are highly impacted by locally derived marine debris, including plastics, microplastics and associated chemical contaminants. In order to catalyze improved solid waste management and plastic use policies, the potential ecological and public health risks must be clearly identified and communicated. Using an ecological risk assessment framework, potential risks to marine ecosystems and human health are explored by quantifying microplastics and organic contaminants at 4 study sites located on Tutuila, American Samoa. Results from sampled near-shore marine waters, marine sediments and molluscs indicate that microplastics are unevenly distributed in the marine environment, with the highest concentrations detected in marine molluscs (e.g. averages of 15 and 17 particles per organism, the majority of which were microfibers identified as polyethylene terephthalate). These invertebrates also had the highest environmental concentrations of organic contaminants, including phthalates, pesticides and PCBs. However, based on estimated rates of invertebrate consumption, the risk of adverse impacts to human health is likely to be low. Regardless, future studies are recommended to better understand the environmental partitioning of microplastics in dynamic near-shore marine environments, as well as the specific pathways and consequences of the physical and chemical impacts of microplastics on marine species populations and overall marine ecosystem health.

Introduction

Over the past decade, an increasing number of studies have documented the presence of marine debris and plastic trash in the world's oceans (Eriksen et al., 2014; Woodall et al., 2014; Jambeck et al., 2015; Borrelle et al., 2020). Micrometer-sized plastic particles, including microplastics from the breakdown of plastic debris into smaller pieces and microfibers from synthetic clothing, fishing gear, car tires and other sources, are increasingly documented on beaches, in oceans, and in deep-sea sediments (Woodall et al., 2014; Carr 2017; See et al., 2020). Microplastics, and especially microfibers, are of growing global concern as they are easily ingested by a wide variety of marine and freshwater organisms, including fish, invertebrates and microorganisms (Galloway et al., 2017). Ingestion of plastic particles has been shown to have both physical and chemical impacts on organism digestive tracts (Sharma and Chatterjee 2017), as well as, in some cases, to induce immuno-toxicological responses, inhibit growth, alter gene expression and cause cell death (Lithner et al., 2011; Egbeocha et al., 2018; Du et al., 2020). Both plastic debris and micro-sized plastics are comprised of polymers (i.e., polyethylene, polyvinyl chloride, polystyrene, polyurethane, polycarbonate) and other additives incorporated during production, such as plasticizers, colorants, flame retardants, resins and anti-oxidants. Additionally, plastics have been shown to "sorb" or accumulate additional hydrophobic organic contaminants from the environment, such as organochlorine pesticides, polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and polybrominated diphenyl ethers (PBDEs) (Engler 2012; Rochman et al., 2013).
As a result, microplastics are sometimes considered a vector for organic pollutant transport into marine organisms via microplastic consumption (Cole et al., 2011; Koelmans et al., 2013; Rocha-Santos and Duarte 2015), which may then pose a threat to human health when these organisms are consumed as seafood (Smith et al., 2018). However, the physical, chemical and biological impacts of microplastics on human health are not well understood (Smith et al., 2018), although some plastic additives, such as phthalates, are known endocrine disrupters that can impact human growth, development and reproductive systems (Wang and Qian 2021). For these reasons, documenting in-situ correlations between microplastic and organic contaminant concentrations in both the environment and marine fauna is important for refining ecological and human health risk assessments. Across Oceania, solid waste management is one of the most pressing environmental problems for Pacific island nations, given the severe lack of land area, high costs of shipping, and little to no capacity for recycling or repurposing of plastics, chemicals and other wastes (Richards and Haynes, 2014; Mohee et al., 2015). On the main island of Tutuila, American Samoa, marine pollution and marine debris are of increasing concern for territorial regulatory agencies, including the American Samoa Environmental Protection Agency (ASEPA) and the Department of Marine and Wildlife Resources (DMWR). As a consequence, the Marine Debris Program of the American Samoa Environmental Protection Agency has prioritized microplastics monitoring, research and risk assessment within its 2016-2025 strategic objectives. Previous studies (Polidoro et al., 2017) have identified organic contaminants of concern (pesticides, PAHs, PCBs, etc.), marine plastic debris, and microplastics in several near-shore coastal areas on Tutuila. Even though locally caught seafood forms a staple of American Samoan diets, no studies have been conducted to date on the presence and distribution of microplastics and potentially associated organic contaminants in marine species in American Samoa. Similarly, no studies have been conducted on the risks that microplastics and these other contaminants may pose to marine ecosystems and human health. To preliminarily assess risk, we applied a screening-level framework developed by the U.S. EPA (1998) that allows for rapid identification and prioritization of contaminants that may cause adverse impacts to ecological or human communities. This screening-level risk assessment methodology characterizes risk by calculating a simple hazard quotient for a given combination of an environmental exposure, or contaminant dose, divided by an available, relevant toxicological threshold. In this approach, a number of risk assessment scenarios can be rapidly evaluated by dividing the relevant measured environmental concentration or calculated ingested dose (for human risk assessment) by the selected relevant toxicological threshold of adverse impact (U.S. EPA, 1998). To facilitate this screening process, an Action Level can be calculated for each contaminant detected, based on known toxicological thresholds of adverse impact (for example, measured in micrograms of contaminant ingested per kg of human body weight), in order to directly compare concentrations measured in the environment to contaminant levels estimated to cause adverse human health impacts (U.S. EPA, 2000).
Our overall objectives were: 1) to document the presence, concentration and environmental behavior of microplastics and potentially associated organic contaminants in intertidal marine waters, marine sediments and molluscs at three study sites in American Samoa, and 2) to apply a risk assessment framework to estimate any potential adverse impacts to human health based on microplastic and organic contaminant concentrations in locally consumed molluscs. The results represent the first human health risk assessment based on the presence of microplastics and organic contaminants in seafood conducted in American Samoa. Identification of the type, concentration and potential health risk associated with microplastics and organic contaminants in American Samoa will help local government agencies and citizen groups to prioritize mitigation of polluted areas, to improve chemical management, and to develop seafood consumption advisories where necessary. At a larger scale, the risk assessment approach presented can be applied across the globe, and especially in data-poor regions, to help identify and prioritize contaminants of concern and to assess whether selected human populations are at elevated risk of adverse health outcomes.

Site characterization

The largest island, Tutuila (140 km²), is the center of American Samoa's government and business and supports a population of more than 56,000 residents. The climate is hot and humid throughout the year, with annual rainfall ranging from 3 to 5 m per year. The most pressing environmental concerns include extensive coastal alterations, fishing pressure, loss of wetlands, soil erosion, coastal sedimentation, solid and hazardous waste disposal, and pollution (Craig et al., 2005). Four coastal areas on the island of Tutuila, American Samoa (inner Nu'uuli Lagoon, Lions Park, Pago Harbor, and Lauli'i Beach) were sampled (Figure 1), based on prior observations of high solid waste or marine debris content (Polidoro et al., 2017). Nu'uuli Lagoon is the largest brackish marine lagoon on Tutuila and receives freshwater input from at least two small rivers. The area is dominated by mangroves in the inner lagoon, with the main airport on its outer edge near Lions Park. Sediments are mostly mud and silt, but large pieces of household trash and commercial debris can be found throughout the near-shore and intertidal zone. Traditionally, Nu'uuli Lagoon has been an important fishing, clamming and recreational area, but significant increases in pollution and marine debris have essentially prohibited these uses. Pago Harbor serves as the main site for shipping and industry on Tutuila and, at its innermost edge, receives freshwater input from Vaipito Stream. Lauli'i Beach is located on the outer, eastern tip of Pago Harbor and also receives some freshwater input from Lauli'i Stream. Sediments here contain more sand, with small patches of coral reef. However, the shorelines are also littered with concrete and other types of plastic and marine debris.

Sampling design

At each of the sites, a 40 m to 50 m square sampling area was established (Figure 1) adjacent to the coastline, within the intertidal zone, and in the vicinity of coastal streams that had been observed to be a source of marine debris. Between September 2017 and July 2018, the inner Nu'uuli Lagoon, Pago Harbor and Lauli'i sites were sampled in the same area once a month for 8 months (i.e. in September, October, November, February, March, May, June and July).
Each sampling event consisted of collecting two 1-L replicates of seawater at approximately 0.5-0.75 m depth around the center of the sampling area in autoclaved 1 L borosilicate glass containers; collecting 6-10 sediment samples of approximately 200 g (wet weight) each with a stainless-steel hand shovel at low tide from the top 10 cm of the intertidal zone along E-W and S-N transects; and opportunistic hand collection of at least 20 bivalves and/or gastropods within the sampling area. During each sampling event, water chemistry (pH, temperature, dissolved oxygen, salinity) was recorded using a multiparameter water meter (HI98914, Hanna Instruments) in the same area where water samples were taken. However, because the invertebrate community at the initial inner Nu'uuli site was observed to be mostly gastropods (Neritina canalis), compared to the outer part of the lagoon near Lions Park where mostly rock oysters (Isognomon spp.) were observed, Lions Park was added as a 4th site in the last 2 months of sampling, where 4 1-L water samples, 12 sediment samples and 153 bivalve samples were collected over the 2 months. In sum, over the course of the project, 52 1-L seawater samples, 185 sediment samples, 465 gastropods (comprising Astraea rhodostoma, Neritina canalis, Nerita plicata, and Batillaria spp.) and 116 rock oysters (Isognomon spp.) were collected across the 4 sites. Seawater samples were processed onto both 47 mm glass fiber filters (Whatman, pore size 0.7 μm) and C18 solid phase extraction disks (Empore 3M) within 24 h of collection. All filters, sediments, and molluscs were frozen immediately after collection and subsequently transported frozen to Arizona State University for extraction and analyses.

Seawater and sediments

Each 1-liter seawater sample was initially filtered through a 47 mm glass fiber filter (Whatman, pore size 0.7 μm). For marine sediments, a 50 g subsample was collected from each thawed and homogenized sediment sample; 250 mL of a 2 M NaCl solution was then added to each sample and agitated/stirred for 6 min. Sediment samples were allowed to settle overnight, and the liquid fraction was removed by glass pipette. This process was repeated 4 times for each sample. The combined liquid extracts from each sample were then filtered through a 47 mm glass fiber filter (Whatman, pore size 0.7 μm) and treated with 30% H₂O₂ to remove any organic material if needed. In controlled trials, approximately 2 g of 4 polymer-type microplastics (LDPE, HDPE, PVC, and PET), ranging in size from 0.3 mm to 2 mm, were added to 50-gram composited sediment samples collected from Nu'uuli Lagoon. In these spiked trials, approximately 85% or more of all microplastics were recovered, which is similar to iterative density separation techniques reported elsewhere (e.g. Thomas et al., 2020; Avio et al., 2015).

Marine molluscs

Half of all molluscs collected during each sampling event, or approximately 290 samples, were analyzed for microplastics. Each mollusc was weighed and measured before being shelled. After shelling, the whole tissue was weighed and placed in 30% H₂O₂, catalyzed with low heat (~50 °C) for 4 h, and left to digest for 96 h. The resulting digestate for each sample was then filtered through a 47 mm glass fiber filter (Whatman, pore size 0.7 μm). This method has been reported to have at least an 85% recovery rate for particles of approximately 200-3000 μm in size (Tsangaris et al., 2021).
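The percent recovery reported for these spiked trials follows the standard calculation below; the masses shown are placeholders, not the values measured in the trials.

```python
# Percent recovery from a spiked sediment trial; values are placeholders.
def percent_recovery(recovered_mass_g, spiked_mass_g):
    return 100.0 * recovered_mass_g / spiked_mass_g

# e.g., 1.7 g of the ~2.0 g of added polymer particles recovered
print(f"{percent_recovery(1.7, 2.0):.0f}% recovery")  # 85%
```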
Quality control

To reduce the possibility of environmental contamination, all samples and glass fiber filters remained covered with aluminum foil when not being actively processed. During active extraction and analyses of microplastics, blank glass fiber filters were left out on the laboratory bench and were also subject to the same extraction and analysis procedures, in order to account for any microplastics introduced during active laboratory analyses (Dehaut et al., 2019). All glass fiber filter blanks were visually analyzed for microplastics, and a subset was analyzed by micro-Raman. Based on the subset analyzed by micro-Raman, an average of less than 1 microfiber per sample could have been introduced.

Microplastic identification

All glass fiber filters were visually examined under an Olympus BX10 light microscope to visually count and map particles that appeared to be either microplastic fragments or microplastic fibers, based on shape, color and/or transparency. After microscopy, a random subsample of 30 glass fiber filters, or approximately 10% of the 291 filtered marine invertebrate samples, plus 5 laboratory blanks to control for microplastics potentially introduced during analyses, were verified for microplastic presence and polymer type using micro-Raman (custom-built in a 180° geometry with an Andor 750 spectrometer). Based on this subsampling, the number of microplastics reported by visual observation was estimated to be between 1.5 and 5 times higher than the number confirmed by micro-Raman, with an average overestimation factor of 3, which is lower than other studies that have reported up to 70% error rates in microscopy (Lusher et al., 2017). However, given the wide range of microplastic variation among samples, the relatively small number of microplastics detected per sample, and the larger objectives of this study, the numbers of microplastic particles reported here have not been corrected by the overestimation factor.
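Although the reported counts were left uncorrected, the correction implied by the Raman-verified subsample is straightforward; a minimal sketch with placeholder counts is shown below.

```python
# Visual-count overestimation factor from a micro-Raman-verified subsample,
# and the correction that could be applied; counts below are placeholders.
def overestimation_factor(visual_count, raman_confirmed_count):
    return visual_count / raman_confirmed_count

def corrected_count(visual_count, factor):
    return visual_count / factor

factor = overestimation_factor(45, 15)  # e.g., a factor of 3, as above
print(corrected_count(18, factor))      # 18 visually counted particles -> 6.0
```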
Seawater and sediments

After pre-filtration with glass fiber filters, all seawater samples were filtered through 47 mm C18 filter disks (Empore, pore size 12 μm), which were then eluted with 5 mL acetone, 5 mL acetonitrile and 8 mL of hexane to pull off any captured organic contaminants; the eluate was then passed through a Na₂SO₄ column to remove excess water. Each sediment sample was thawed, homogenized and sieved to < 5 mm. A subsample of approximately 5 g was spiked with 60 μg of p-terphenyl as a recovery surrogate and then homogenized in a porcelain mortar and pestle with 20 g of Na₂SO₄ to remove water. Homogenized samples were then spun on a rotor for 48-72 h in 60 mL of 1:1 hexane:acetone. Solvents were decanted and filtered with a glass fiber filter, then passed through a column of silica gel to remove polar compounds and Na₂SO₄ to remove water. Samples that were exceptionally high in organic sulfur (yellow coloration) were also passed through a column of Florisil (You and Lydy 2004).

Marine molluscs

Half of all molluscs sampled, or approximately 300 samples, were used for organic contaminant analyses. Each mollusc was weighed and measured before being shelled. After shelling, the whole tissue was weighed, spiked with 15 μg of p-terphenyl as a recovery surrogate, and then homogenized (e.g. mixed) in 1:4 parts Na₂SO₄ to remove excess water. All homogenized invertebrate samples were then spun on a rotor for 48 h in 60 mL of 1:1 hexane:acetone. Extracts were passed through several cleanup columns to remove larger molecules (e.g. Bio-Beads SX-3, Bio-Rad) and polar compounds (e.g. silica gel), and occasionally through Florisil if highly colored.

Contaminant identification and quality control

All samples were concentrated to a final volume of 0.5 mL using nitrogen gas and then analyzed for organic contaminants using a Varian 3800 gas chromatograph in tandem with a Saturn 2200 electron ionization mass spectrometer. Minimum detection limits (MDLs) were estimated by doubling the lowest standard concentration that showed a peak with a signal-to-noise ratio greater than 3. To estimate method recoveries for bivalves and sediments, selected samples were homogenized and spiked with known concentrations of PCBs, pesticides, phthalates, and PAHs. Method recoveries ranged from 40% to 90% for PCBs, from 25% to 70% for pesticides, from 30% to 80% for phthalates, and from 20% to 90% for PAHs. Method recoveries for seawater are very similar and are reported in Polidoro et al. (2017). All results presented are uncorrected for method recoveries.

Risk assessment

In order to estimate risk to human health, a screening-level hazard quotient approach was used (US EPA 1998), where risk = dose of contaminant (in mg consumed per day, per kg body weight) divided by a relevant toxicological threshold for adverse health impacts for that contaminant (in mg of contaminant per kg body weight). To calculate contaminant dose, average body weights of 100 kg were used for adults and 50 kg for children (Rodriguez-Martinez et al., 2020), with estimated shelled seafood consumption rates of 200 g/day for adults and 100 g/day for children. For toxicological thresholds, the EPA oral reference doses (oral RfDs) for detected contaminants were used (Table 1). However, it is important to note that oral reference doses and/or other toxicological thresholds were not available for all contaminants detected. This is especially true for microplastics, which currently have no toxicological thresholds for adverse impacts related to the number of particles consumed, due to their extreme variation in chemical composition, size, shape, degradation rates, etc. Until more research is available on physical toxicological thresholds for microplastic ingestion, thresholds based on the chemical composition of microplastics are assumed to be the best surrogate (e.g. toxicological thresholds for plasticizers, including phthalates, and other chemical additives). In order to directly compare consumption rates and different risk scenarios with contaminant results reported in parts per million (ppm), an Action Level for each detected contaminant with an available oral RfD was calculated as: Action Level (ppm or mg/kg) = (oral RfD (mg/kg) × body weight (kg)) ÷ average daily serving size (kg). Calculation of an Action Level for different body weight and consumption rate scenarios allows for direct comparison of detected contaminant concentrations (in ppm) with Action Levels (in ppm). Where maximum or average detected contaminant concentrations in molluscs exceed calculated Action Levels, the risk of adverse health impacts is considered to be elevated.
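The screening formula above transcribes directly into code. The sketch below encodes the Action Level calculation and a hazard quotient for the two exposure scenarios used in this study (adult: 100 kg, 200 g/day; child: 50 kg, 100 g/day); the oral RfD shown is a placeholder, not a value from Table 1.

```python
# Screening-level Action Level and hazard quotient, following the formula in
# the text. The oral RfD below is a placeholder, not a value from Table 1.
def action_level_ppm(oral_rfd_mg_per_kg_day, body_weight_kg, serving_kg_per_day):
    return oral_rfd_mg_per_kg_day * body_weight_kg / serving_kg_per_day

def hazard_quotient(detected_ppm, action_level):
    # HQ > 1 flags potential for adverse effects under the assumed scenario.
    return detected_ppm / action_level

scenarios = {"adult": (100.0, 0.200), "child": (50.0, 0.100)}  # kg, kg/day
rfd = 0.0005  # mg/kg/day, hypothetical oral RfD
for name, (bw, serving) in scenarios.items():
    al = action_level_ppm(rfd, bw, serving)
    hq = hazard_quotient(0.5, al)  # e.g., a detected concentration of 0.5 ppm
    print(f"{name}: Action Level = {al:.3f} ppm, HQ at 0.5 ppm = {hq:.1f}")
```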
Microplastics

Microplastic concentrations detected in near-shore marine waters were very low, ranging from 0 to approximately 10 microfibers per liter across all sites, most of which could not be identified to polymer type. Microplastics in marine sediments were not well detected (only a few fibers in just a few samples); therefore, given our visual observation error rate and background blank subtraction, sediment microplastic concentrations are not reported here. However, the sediment extraction method employed in this study may not work well for higher-density microplastics comprised of PVC or PET, which have been shown to be resistant to removal from saturated NaCl solutions (Thomas et al., 2020). In addition, some marine sediments collected from Nu'uuli Lagoon and Pago Harbor were relatively high in organic material, with fine silty-clay textures, which has been shown to significantly reduce recovery rates (Cashman et al., 2020).
Although Action Levels become more "protective" or lower with higher consumption rates and lower body weights, it is unlikely that the gastropods collected from Lauli'i, inner Nu'uuli Lagoon, and Pago Harbor pose a risk to human health, as they are probably being consumed at lower rates than those used in the Action Level calculations. However, of higher concern are the levels of Chlordane and PCBs detected in rock oysters from Lions Park, which may be more regularly consumed. In terms of environmental and/or ecological risk, all of the marine sites sampled show varying levels of PCBs and pesticides, with the highest concentrations of these persistent contaminants detected at Lauli'i Beach.

Discussion

Overall, very few microplastics were found in near-shore marine waters and intertidal sediments, compared to intertidal molluscs. Although the concentrations of detected microplastics in marine waters were within the range of other studies (Burns and Boxall 2018; Bucci et al., 2020), microplastics were not well-detected in marine sediments. It is likely that microplastics in the intertidal areas sampled in American Samoa are not being uniformly deposited in this dynamic marine environment, but rather are patchily distributed and/or being carried out to offshore areas. Other studies have also shown that microplastic abundance in the intertidal zone is negatively associated with the strength of hydrological processes, including flow velocity and submergence time (Wu et al., 2020). Regardless, comparison of microplastic particles in sea water and marine sediments across different studies is increasingly problematic, given the extreme variation in extraction methods, temporal and spatial sampling regimes, net or filtration pore sizes, and the general lack of reporting of recovery rates (Cutroneo et al., 2020; Phuong et al., 2021). For example, studies of microplastics in seawater in the United Kingdom using 5 μm filters found an average of only 1.5 to 6.7 particles per liter (Li et al., 2018), while other studies with much larger sample sizes (e.g. more water filtered), over larger spatial scales and with larger 300 μm pore-sized nets, found between 8 and 9200 particles per cubic meter (or 0.008 to 9.2 particles per liter) (Desforges et al., 2014). Given the low numbers of microplastics detected in marine sediments in our study, potential issues with the selected extraction methods cannot be ruled out (Cashman et al., 2020; Phuong et al., 2021). In a systematic review of 70 studies of microplastics extracted from marine sediments, Phuong et al. (2021) found that only 22 reported method recovery rates. Of the reviewed studies that used extraction methods similar to ours (e.g. wet sediment, NaCl floatation with or without H2O2 treatment, and filtration through 0.7 μm glass fiber filters), only two studies reported recovery rates (Fries et al., 2013; Karlsson et al., 2017), both of which reported recovery rates very similar to ours (e.g. greater than 80%). However, several field studies that have used similar methods of extraction found much higher numbers of microplastics in marine sediments compared to our study, including averages of 730-2300 particles per kg of sediment in tidal sediments in the United Kingdom (Blumenroder et al., 2017), and from 672 to 2175 particles per kg of sediment in marine lagoons near Venice, Italy (Vianello et al., 2013).

[Figure 4 a-c: Average summed organic contaminant concentrations in marine waters, sediments, and molluscs per site. Error bars represent standard deviation; the number in parentheses after each site indicates the number of total samples collected and analyzed over the study period.]

Although increased efforts are needed to include microplastic recovery rates in all field and laboratory studies, optimal recovery of most polymers found in the environment appears to be based on pre-sieving sediment through a 5 mm mesh, floatation with NaI or ZnCl2 (rather than NaCl, which may be best only for polyethylene and polystyrene), followed by 30% H2O2 digestion either before or after filtration (Phuong et al., 2021).
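The salt-choice recommendation above follows directly from a float/sink density comparison. The sketch below uses indicative textbook densities (g/cm3) for the polymers and brines; none of these numbers are measurements from this study:

```cpp
#include <cstdio>

struct Polymer { const char* name; double density; }; // g/cm^3, indicative

int main() {
    const Polymer polymers[] = {
        {"PE", 0.95}, {"PP", 0.90}, {"PS", 1.05},
        {"PA", 1.14}, {"PC", 1.20}, {"PET", 1.38}, {"PVC", 1.40},
    };
    const double rho_NaCl  = 1.2; // saturated NaCl solution
    const double rho_ZnCl2 = 1.6; // concentrated ZnCl2 (NaI is similar)
    for (const Polymer& p : polymers) {
        std::printf("%-4s floats in NaCl: %-3s  in ZnCl2: %s\n", p.name,
                    p.density < rho_NaCl  ? "yes" : "no",
                    p.density < rho_ZnCl2 ? "yes" : "no");
    }
    // PET and PVC sink in saturated NaCl, consistent with the poor
    // recoveries reported by Thomas et al. (2020).
    return 0;
}
```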
Lastly, confirmation of polymers, regardless of size, needs to be performed through an appropriate method (e.g. FTIR, Raman, LDIR, or GCMS-pyrolysis). Regardless, microplastics are most certainly present in the near-shore marine environment in American Samoa, as the filter-feeding bivalves collected in our study contained very fine microfragments and microfibers likely from the water column or, in the case of gastropods, grazed from microplastic particles potentially trapped in algae or other plant material attached to hard substrate. Interestingly, other studies have found similar results in terms of the number of microplastics and microfibers in different species of intertidal molluscs. For example, a study of a variety of bivalves and gastropods in Hong Kong tidal flats also found a mean of approximately 0-18 particles per organism, dominated by microfibers, with a higher abundance found in gastropods. In another study of both bivalves and gastropods in the Persian Gulf, the mean number of microplastic particles was 0-21 particles, also dominated by microfibers (Naji et al., 2018). Across the different sites sampled in American Samoa, there were no significant differences in concentrations of microplastics and organic contaminants. This indicates that microplastics and the detected contaminants are likely ubiquitous and diffuse across the study sites, and may be from different non-point sources of pollution, such as agriculture, industrial runoff, buried legacy waste, etc. However, on a wet weight basis, it is clear that molluscs are bioaccumulating more microplastics and organic contaminants compared to in-situ waters and intertidal sediments. What is not clear is whether the microplastics themselves are a primary vector of organic contaminant transport and bioaccumulation in marine molluscs, or if contaminants are accumulating in molluscs from the ambient environment. To address this data gap, chemicals could be extracted directly from the microplastics isolated from molluscs using, for example, pyrolysis-GCMS (Primpke et al., 2020). However, this can be limited and/or extremely difficult due to small sample sizes and the microscopic sizes of some microfibers. Under controlled settings, complementary feeding studies in which the chemical composition of microplastics is determined before ingestion and after egestion could also increase understanding of potential transfer or release of contaminants from microplastics into organism tissue. As such, it is increasingly important that laboratory feeding studies are environmentally relevant, and that more rigorous guidelines and protocols are established for environmental field studies to help unify microplastic sampling, extraction and identification methods. The maximum (but not the average) detected concentration of some organic contaminants (Chlordane, PCBs, and DEHP) quantified in bivalves and gastropods exceeded calculated Action Levels.
These maximum concentration exceedances can indicate potential elevated risk of adverse health impacts for populations that regularly consume moderate to high amounts of these molluscs, and/or have body weights that are lower than those used here, especially as reported concentrations of organic contaminants were not corrected for method recoveries. However, marine molluscs are not as widely consumed as locally-caught fishes or other protein sources available in American Samoa, and the consumption rates used to calculate Action Levels are likely higher than actual consumption rates of the current population. Similarly, it is important to note that oral reference doses, and other safety standards, are calculated based on assumed chronic consumption by an adult of average height and weight on a regular basis over a long-term period of time, and are not set on the premise of one-time consumption. Regardless, based on the known toxicological impacts of the contaminants with Action Level exceedances (US EPA 2020; US EPA 2003; US EPA 1991), certain populations of American Samoans may be at elevated risk for cancer from chronic exposure to DEHP, PCBs, or Chlordane. Although the sources of DEHP are likely existing plastic pollution, PCBs and Chlordane have been essentially banned for use in the United States since the late 1970s. These contaminants may be legacy pollutants from agriculture, industrial or military activities prior to the 1980s (Polidoro et al., 2017). Further studies are needed to determine the source, transport pathways, ecological impacts and subsequent mitigation strategies for these and the other contaminants detected in American Samoan near-shore marine environments. Given the enormity of this task and the limited amount of data available on specific toxicological thresholds for most marine species and ecosystems, the impacts of microplastics and organic contaminants on marine populations and ecosystems could potentially be examined more efficiently within a trait-based risk assessment framework that ranks species' relative vulnerabilities to contaminants. Lastly, more research is needed to determine if physical thresholds for microplastic ingestion or environmental presence can be systematically linked to adverse ecological or negative health outcomes. Given the extreme variation in the composition, shape and size of microplastics in the environment, this will likely be very difficult to standardize, especially across different types of organisms. For example, the 48-hour acute toxicity (EC50) of polyethylene microfragments (37.24 ± 11.76 μm) on the common test organism Daphnia magna was found to be 80 times higher than that of polyethylene microbeads (37.05 ± 3.96 μm), potentially due to the irregular shape and high specific surface area of fragments vs. beads (Na et al., 2021). Similarly, smaller polystyrene microbeads (7.3 μm) have been found to significantly reduce algal feeding in the marine copepod Centropages typicus compared with larger polystyrene microbeads (20.6 μm) (Cole et al., 2013). One option towards harmonization of physical (e.g. size, shape, polymer type) toxicological thresholds for microplastics could be the development of standards for microplastic particulates similar to those for total suspended solids and/or total dissolved solids in current drinking and surface water regulation.
Conclusions

In conclusion, ecological and human health risk assessment frameworks can help to prioritize contaminants, species, geographical areas and selected populations for contaminant mitigation and improved management actions. As molluscs are an important source of protein across the globe, this study provides a framework for scientific or regulatory agencies working in similar, data-poor regions to conduct screening-level risk assessments using in-situ, baseline studies at the local or regional scale. Additionally, this project relied on extensive participatory training, education, and capacity building opportunities for local researchers, community fishers, community college students, and the general public in American Samoa, which will not only strengthen local career opportunities and skillsets, but will also increase community awareness and action to reduce microplastic, solid waste and other pollutants in near-shore coastal ecosystems. Although the amounts of microplastics detected in marine molluscs in American Samoa were somewhat comparable with other studies, the amounts of microplastics detected in marine waters were very low, and basically negligible in marine sediments. These results show the critical importance of continued method development for optimizing extraction of different sizes, shapes, and types of microplastics from widely variable environmental media. Additionally, field collection of environmental samples must consider that microplastics are not evenly distributed across the land- or seascape. Rather, microplastics are highly likely to be patchily distributed, with higher concentrations in some areas compared to others, due to varying oceanographic, organismal and polymer conditions that control input, transport, deposition, uptake, degradation and accumulation of microplastics. Further studies are also needed to address both the chemical and physical impacts of microplastic ingestion on human and marine species health, for use within risk assessment frameworks. However, given that the physical impacts of microplastic ingestion on organism health are highly dependent not only on the amounts of microplastics ingested, but also on their shape, size, chemical composition, and egestion or excretion rate, it seems unlikely that an impact threshold for physical ingestion of microplastics can be feasibly developed for a wide range of organisms (including humans). At present, characterization of the chemical constituents of microplastics (including polymer additives and sorbed or associated environmental contaminants) can, at the very minimum, provide a measure of the potential chemical impacts of plastics, based on oral or other exposures for different organisms (including humans). These chemical exposures, or doses, can then be compared to available data on health or ecosystem impacts, based on the oral reference doses or other relevant toxicological thresholds.

Author contribution statement

Beth Polidoro: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Tiffany Lewis: Performed the experiments; Contributed reagents, materials, analysis tools or data. Cassandra Clement: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.

Funding statement

This work was supported by National Ocean Service (NA17_NOS9990026).

Data availability statement

Data associated with this study has been deposited at NOAA Marine Debris Clearinghouse under the Project ID NA17NOS9990026.
Declaration of interests statement

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.
2022-03-16T15:26:52.945Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "1116c51bc14853f0b75055bc317d4e5192b17070", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "dd64aab0eeecc198ffbcfba87d217526784ccf70", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
119361356
pes2o/s2orc
v3-fos-license
Higgs mass prediction in the MSSM at three-loop level in a pure $\overline{\text{DR}}$ context

The impact of the three-loop effects of order $\alpha_t\alpha_s^2$ on the mass of the light CP-even Higgs boson in the MSSM is studied in a pure $\overline{\text{DR}}$ context. For this purpose, we implement the results of Kant et al. into the C++ module Himalaya and link it to FlexibleSUSY, a Mathematica and C++ package to create spectrum generators for BSM models. The three-loop result is compared to the fixed-order two-loop calculations of the original FlexibleSUSY and of FeynHiggs, as well as to the result based on an EFT approach. Aside from the expected reduction of the renormalization scale dependence with respect to the lower order results, we find that the three-loop contributions significantly reduce the difference from the EFT prediction in the TeV-region of the SUSY scale $M_S$. Himalaya can be linked also to other two-loop $\overline{\text{DR}}$ codes, thus allowing for the elevation of these codes to the three-loop level.

Introduction

The measurement of the Higgs boson mass at the Large Hadron Collider (LHC) represents a significant constraint on the viability of supersymmetric (SUSY) models. Given a particular SUSY model, the mass of the Standard Model-like Higgs boson is a prediction, which must be in agreement with the measured value of (125.09 ± 0.21 ± 0.11) GeV [2]. Notably, the experimental uncertainty on the measured Higgs mass has already reached the per-mille level. Theory predictions in SUSY models, however, struggle to reach the same level of accuracy. The reason is that the Higgs mass receives large higher order corrections, dominated by the top Yukawa and the strong gauge coupling [3][4][5]. Both of these couplings are comparatively large, leading to a relatively slow convergence of the perturbative series. Furthermore, the scalar nature of the Higgs implies corrections proportional to the square of the top-quark mass, on top of the top-mass dependence due to the Yukawa coupling, which enters the loop corrections quadratically. On the other hand, corrections from SUSY particles are only logarithmic in the SUSY particle masses due to the assumption of only soft SUSY-breaking terms. If the SUSY particles are not too far above the TeV scale [6,7], the SUSY Higgs mass can be obtained from a fixed-order calculation of the relevant one- and two-point functions with external Higgs fields. In this case, higher order corrections up to the three-loop level are known in the Minimal Supersymmetric Standard Model (MSSM) [1,5,[8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23]. There are plenty of publicly available computer codes which calculate the Higgs pole mass(es) in the MSSM at higher orders: CPsuperH [24][25][26], FeynHiggs [9,[27][28][29][30][31], FlexibleSUSY [32,33], H3m [1,20], ISASUSY [34], MhEFT [35], SARAH/SPheno [36][37][38][39][40][41][42], SOFTSUSY [43,44], SuSpect [45] and SusyHD [46]. FeynHiggs adopts the on-shell scheme for the renormalization of the particle masses, while all other codes express their results in terms of MS/DR parameters. All these schemes are formally equivalent up to higher orders in perturbation theory, of course. The numerical difference between the schemes is, however, one of the sources of theoretical uncertainty on the Higgs mass prediction. All of these programs take into account one-loop corrections, most of them also the leading two-loop corrections.
H3m is the only one which includes three-loop corrections of order $\alpha_t\alpha_s^2$, where $\alpha_t$ is the squared top Yukawa coupling and $\alpha_s$ is the strong coupling. It combines these terms with the on-shell two-loop result of FeynHiggs after transforming the $\mathcal{O}(\alpha_t)$ and $\mathcal{O}(\alpha_t\alpha_s)$ terms from there to the DR scheme. Here we present an alternative implementation of the $\mathcal{O}(\alpha_t\alpha_s^2)$ contributions of Refs. [1,20] for the light CP-even Higgs mass in the MSSM into the framework of FlexibleSUSY [32], referring to the combination as FlexibleSUSY+Himalaya in what follows. This allows us to study the effect of the three-loop contributions in a pure DR environment, i.e. without the trouble of combining the corrections with an on-shell calculation. The three-loop terms are provided in the form of a separate C++ package, named Himalaya, which one should be able to include in any other DR code without much effort. The Himalaya package and the dedicated version of FlexibleSUSY which incorporates the three-loop contributions from Himalaya can be downloaded from Refs. [47,48], respectively. In this way, we hope to contribute to the on-going effort of improving the precision of the Higgs mass prediction in the MSSM. In the present paper we study the impact of the three-loop corrections for low and high SUSY scales and compare our results to the two-loop calculations of the public spectrum generators FlexibleSUSY and FeynHiggs. By quantifying the size of the three-loop corrections, we also provide a measure for the theoretical uncertainty of the DR fixed-order calculation. As will be shown below, the implementation of the $\alpha_t\alpha_s^2$ corrections also applies to the terms of order $\alpha_b\alpha_s^2$, where $\alpha_b$ is the bottom Yukawa coupling. Therefore, Himalaya will take such terms into account, and we will refer to the sum of top- and bottom-Yukawa induced supersymmetric QCD (SQCD) corrections as $\mathcal{O}(\alpha_t\alpha_s^2 + \alpha_b\alpha_s^2)$ in what follows. However, it should be kept in mind that this does not include effects of order $\alpha_s^2\sqrt{\alpha_t\alpha_b}$, which arise from three-loop Higgs self energies involving both a top/stop and a bottom/sbottom triangle. The results of Himalaya are thus unreliable in the (rather exotic) case where $\alpha_t$ and $\alpha_b$ are comparable in magnitude. The remainder of this paper is structured as follows. Section 2 describes the form in which the three-loop contributions of order $(\alpha_t+\alpha_b)\alpha_s^2$ are implemented in Himalaya. Its input parameters are to be provided in the DR scheme at the appropriate perturbative order. Section 3 details how this input is prepared in the framework of FlexibleSUSY. It also summarizes all the contributions that enter the final Higgs mass prediction in FlexibleSUSY+Himalaya. Section 4 analyzes the impact of various three-loop contributions on this prediction as well as the residual renormalization scale dependence, and it compares the results obtained with FlexibleSUSY+Himalaya to existing fixed-order and resummed results for the light Higgs mass. In particular, this includes a comparison to the original implementation of the three-loop effects in H3m. Our conclusions are presented in Section 5. Technical details of Himalaya, its link to FlexibleSUSY, and run options are collected in the appendix.

Higgs mass prediction at the three-loop level in the MSSM

The results for the three-loop $\alpha_t\alpha_s^2$ corrections to the Higgs mass in the MSSM have been obtained in Refs. [1,20]
by a Feynman diagrammatic calculation of the relevant one- and two-point functions with external Higgs fields in the limit of vanishing external momenta. The dependence of these terms on the squark and gluino masses was approximated through asymptotic expansions, assuming various hierarchies among the masses of the SUSY particles. For details of the calculation we refer to Refs. [1,20].

Selection of the hierarchy

A particular set of parameters typically matches several of the hierarchies mentioned above. In order to select the most suitable one, Ref. [1] suggested a pragmatic approach, namely the comparison of the various asymptotic expansions to the exact expression at the two-loop level. Himalaya also adopts this approach, but introduces a few refinements in order to further stabilize the hierarchy selection (see also Ref. [49]). In a first step, the Higgs pole mass $M_h$ is calculated at the two-loop level at order $\alpha_t\alpha_s$ using the result of Ref. [12] in the form of the associated FORTRAN code provided by the authors. We refer to this quantity as $M_h^{\rm DSZ}$ in what follows. Subsequently, for all hierarchies $i$ which fit the given mass spectrum, $M_h$ is calculated again using the expanded expressions of Ref. [1] at the two-loop level, resulting in $M_{h,i}$. In the original approach of Ref. [1], the hierarchy is selected as the value of $i$ for which the difference

$\delta_i^{2L} = \big| M_h^{\rm DSZ} - M_{h,i} \big| \qquad (1)$

is minimal. However, we found that this criterion alone causes instabilities in the hierarchy selection in regions where several hierarchies lead to similar values of $\delta_i^{2L}$. We therefore refine the selection criterion by taking into account the quality of the convergence in the respective hierarchies, quantified by

$\delta_i^{\rm exp} = \big| M_{h,i} - \tilde{M}_{h,i} \big| \qquad (2)$

While $M_{h,i}$ includes all available terms of the expansion in the mass (and mass difference) ratios, in $\tilde{M}_{h,i}$ the highest terms of the expansion for the mass (and mass difference) ratios are dropped. We then define the "best" hierarchy to be the one which minimizes the quadratic mean of Eqs. (1) and (2),

$\delta_i = \sqrt{\tfrac{1}{2}\big[(\delta_i^{2L})^2 + (\delta_i^{\rm exp})^2\big]} \qquad (3)$

(a schematic implementation of this selection is sketched at the end of this section). The relevant analytical expressions for the three-loop terms of order $\alpha_t\alpha_s^2$ to the CP-even Higgs mass matrix in the various mass hierarchies are quite lengthy. However, they are accessible in Mathematica format in the framework of the publicly available program H3m. We have transformed these formulas into C++ format and implemented them into Himalaya. The hierarchies defined in H3m equally apply to the top and the bottom sector of the MSSM, so that the results of Ref. [1] can also be used to evaluate the corrections of order $\alpha_b\alpha_s^2$ to the Higgs mass matrix. Indeed, Himalaya takes these corrections into account. However, as already pointed out in Section 1, a complete account of the top- and bottom-Yukawa effects to order $\alpha_s^2$ would require to include the contribution of diagrams which involve both top/stop and bottom/sbottom loops at the same time. These were not considered in Ref. [1], and thus the Himalaya result should only be used in cases where such mixed $\sqrt{\alpha_t\alpha_b}$ terms can be neglected.

Modified DR scheme

By default, all the parameters of the calculation are renormalized in the DR scheme. However, in this scheme one finds artificial "non-decoupling" effects [12], meaning that the two- and three-loop result for the Higgs mass depends quadratically on a SUSY particle mass if this mass gets much larger than the others. Such terms are avoided by transforming the stop masses to a non-minimal scheme, named MDR (modified DR) in Ref. [1], which mimics the virtue of the on-shell scheme of automatically decoupling the heavy particles.
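The selection criterion of Eqs. (1)-(3) can be summarized in a few lines of schematic code (this is an illustration, not the actual Himalaya implementation; the expanded masses and the exact two-loop reference are placeholders that the package computes internally):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Hierarchy {
    double Mh2L;       // M_{h,i}: expanded two-loop Higgs mass
    double Mh2L_trunc; // Mtilde_{h,i}: expansion with highest terms dropped
};

std::size_t select_hierarchy(const std::vector<Hierarchy>& fits,
                             double Mh2L_exact /* M_h^DSZ of Ref. [12] */) {
    std::size_t best = 0;
    double best_delta = INFINITY;
    for (std::size_t i = 0; i < fits.size(); ++i) {
        const double d2L  = std::fabs(Mh2L_exact - fits[i].Mh2L);         // Eq. (1)
        const double dexp = std::fabs(fits[i].Mh2L - fits[i].Mh2L_trunc); // Eq. (2)
        const double delta = std::sqrt(0.5 * (d2L * d2L + dexp * dexp));  // Eq. (3)
        if (delta < best_delta) { best_delta = delta; best = i; }
    }
    return best;
}
```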
If the user wishes to use this MDR scheme rather than pure DR, Himalaya writes the Higgs mass matrix as

$\hat{M} = M^{\rm tree} + M^{(\alpha_t)} + M^{(\alpha_t\alpha_s)}(m_{\tilde{t}}) + \hat{M}^{(\alpha_t\alpha_s^2)}(\hat{m}_{\tilde{t}}) + \cdots \qquad (4)$

where $M$ and $\hat{M}$ are the Higgs mass matrices in the DR and the MDR scheme, respectively, $M^{\rm tree} = \hat{M}^{\rm tree}$ is the tree-level expression, and the superscript $(x)$ denotes the term of order $x \in \{\alpha_t, \alpha_s, \alpha_t\alpha_s, \ldots\}$. The ellipsis in Eq. (4) symbolizes any terms that involve coupling constants other than $\alpha_t$ or $\alpha_s$, or higher orders of the latter. For brevity we suppress the stop mass indices "1" and "2" here. Himalaya provides the numerical results for $M^{(\alpha_t\alpha_s^2)}(m_{\tilde{t}})$ as well as for its MDR counterpart $\hat{M}^{(\alpha_t\alpha_s^2)}(\hat{m}_{\tilde{t}})$ of Eq. (5), where the MDR stop mass $\hat{m}_{\tilde{t}}$ is calculated from its DR value $m_{\tilde{t}}$ by the conversion formulas through $\mathcal{O}(\alpha_s^2)$ provided in Ref. [1]. Note that these conversion formulas depend on the underlying hierarchy, and may be different for $m_{\tilde{t},1}$ and $m_{\tilde{t},2}$. Even if the result is requested in the MDR scheme, the output of Himalaya can thus be directly combined with pure DR results through $\mathcal{O}(\alpha_t\alpha_s)$ according to Eq. (4) in order to arrive at the mass matrix at order $\alpha_t\alpha_s^2$. Of course, one may also request the plain DR result from Himalaya, in which case it will simply return the numerical value for $M^{(\alpha_t\alpha_s^2)}(m_{\tilde{t}})$, which can be directly added to any two-loop DR result. In any case, the difference between the DR and MDR result is expected to be quite small unless the mass splitting between one of the stop masses and other, heavier, strongly interacting SUSY particles becomes very large. As a practical example, in Figure 1 we show the difference of the lightest Higgs mass at the three-loop level calculated in the DR and MDR scheme. All DR soft-breaking mass parameters, the $\mu$ parameter of the MSSM super-potential, and the running CP-odd Higgs mass are set equal to $M_S$ here. The running trilinear couplings, except $A_t$, are chosen such that the sfermions do not mix. The DR stop mixing parameter $X_t = A_t - \mu/\tan\beta$ is left as a free parameter. For this scenario we find that the difference between the DR and MDR scheme is below 100 MeV for different values of the stop mixing parameter. Note that for all terms in the Higgs mass matrix except $\alpha_t$, $\alpha_t\alpha_s$, and $\alpha_t\alpha_s^2$, it is perturbatively equivalent to use either the DR or the MDR stop mass as defined above. Predominantly, this concerns the electroweak contributions as well as the terms of order $\alpha_t^2$. In this paper, we use the DR stop mass for these contributions.

Determination of the MSSM DR parameters

FlexibleSUSY determines the running DR gauge and Yukawa couplings as well as the running vacuum expectation value of the MSSM along the lines of Ref. [50], by setting them at the scale $M_Z$ through threshold-corrected matching to the Standard Model input. The couplings $\alpha_{\rm em}^{\rm MSSM}(M_Z)$ and $\alpha_s^{\rm MSSM}(M_Z)$ are calculated from the corresponding input parameters as

$\alpha_i^{\rm MSSM}(M_Z) = \frac{\alpha_i(M_Z)}{1 - \Delta\alpha_i(M_Z)}, \qquad i = {\rm em},\, s,$

where the threshold corrections $\Delta\alpha_i(M_Z)$ collect the logarithms of the heavy MSSM masses; for the strong coupling,

$\Delta\alpha_s(M_Z) = \frac{\alpha_s(M_Z)}{2\pi}\left[\frac{1}{2} - \frac{2}{3}\ln\frac{m_t}{M_Z} - 2\ln\frac{m_{\tilde{g}}}{M_Z} - \frac{1}{6}\sum_{\tilde{q}}\ln\frac{m_{\tilde{q}}}{M_Z}\right]. \qquad (13)$

The DR weak mixing angle in the MSSM, $\theta_w$, is determined at the scale $M_Z$ from the Fermi constant $G_F$ and the Z pole mass via the relation

$\sin^2\theta_w \cos^2\theta_w = \frac{\pi\,\alpha_{\rm em}^{\rm MSSM}(M_Z)}{\sqrt{2}\, M_Z^2\, G_F\, (1 - \Delta\hat{r})},$

where $\Delta\hat{r}$ is built from the transverse parts of the $W$ and $Z$ self-energies evaluated at $p^2 = 0$ and $p^2 = M_Z^2$, respectively, together with vertex, box, and two-loop contributions. Here, $\Sigma_{V,T}(p^2)$ denotes the transverse part of the DR-renormalized one-loop self energy of the vector boson $V$ in the MSSM. The vertex and box contributions $\delta_{\rm VB}$ as well as the two-loop contributions $\delta_r^{(2)}$ are taken from Ref. [50].
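To illustrate the matching step for the strong coupling, a minimal sketch of Eq. (13) and the subsequent conversion to the MSSM value (function and variable names are ours, not FlexibleSUSY's):

```cpp
#include <array>
#include <cmath>

// One-loop decoupling of the heavy MSSM states from alpha_s at the scale Q,
// following the logarithmic threshold correction of Eq. (13)
// (top quark, gluino, and the 12 squark mass eigenstates).
double delta_alpha_s(double alpha_s, double Q, double mt, double mgluino,
                     const std::array<double, 12>& msquark) {
    const double pi = 3.141592653589793;
    double squark_logs = 0.0;
    for (double mq : msquark) squark_logs += std::log(mq / Q);
    return alpha_s / (2.0 * pi) *
           (0.5 - (2.0 / 3.0) * std::log(mt / Q)
                - 2.0 * std::log(mgluino / Q)
                - squark_logs / 6.0);
}

// MSSM DR-bar value from the SM input: alpha_s^MSSM = alpha_s / (1 - delta).
double alpha_s_mssm(double alpha_s_sm, double delta) {
    return alpha_s_sm / (1.0 - delta);
}
```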
The DR vacuum expectation values of the up- and down-type Higgs doublets are calculated as $v_u = v\sin\beta$ and $v_d = v\cos\beta$, where $\tan\beta(M_Z)$ is an input parameter and the overall scale $v$ is fixed by $m_Z(M_Z)$, the Z boson DR mass in the MSSM, which is calculated from the Z pole mass at the one-loop level as

$m_Z^2(M_Z) = M_Z^2 + {\rm Re}\,\Sigma_{Z,T}(M_Z^2).$

In order to calculate the Higgs pole mass in the DR scheme at the three-loop level $\mathcal{O}(\alpha_t\alpha_s^2 + \alpha_b\alpha_s^2)$, the DR top and bottom Yukawa couplings must be extracted from the input parameters $M_t$ and $m_b$. In order to achieve that, we make use of the known two-loop SQCD contributions to the top and bottom Yukawa couplings of Refs. [51][52][53][54], as described in the following: We calculate the DR Yukawa coupling $y_t$ at the scale $M_Z$ from the DR top mass $m_t$ and the DR up-type VEV $v_u$ as

$y_t(M_Z) = \frac{\sqrt{2}\, m_t(M_Z)}{v_u(M_Z)}.$

In our approach, we relate the DR top mass to the top pole mass $M_t$ at the scale $M_Z$ as

$m_t(M_Z) = M_t + {\rm Re}\,\Sigma_t^S(M_t^2, M_Z) + M_t\,{\rm Re}\big[\Sigma_t^L(M_t^2, M_Z) + \Sigma_t^R(M_t^2, M_Z)\big] + \Delta m_t^{(1),\rm SQCD} + \Delta m_t^{(2),\rm SQCD},$

where $\Sigma_t^{S,L,R}(p^2, Q)$ denote the scalar (superscript $S$) and the left- and right-handed parts ($L$, $R$) of the DR renormalized one-loop top self-energy without the gluon, stop, and gluino contributions, and $\Delta m_t^{(1),\rm SQCD}$ and $\Delta m_t^{(2),\rm SQCD}$ are the full one- and two-loop SQCD corrections taken from Refs. [51,52]. In the explicit expression for the one-loop SQCD correction (Eq. (22)), $C_F = 4/3$ and $s_{2\theta_t} = \sin 2\theta_t$, with $\theta_t$ the stop mixing angle. The two-loop term $\Delta m_t^{(2),\rm dec}$ is given in Ref. [51] for general stop, sbottom, and gluino masses. The MSSM DR bottom-quark Yukawa coupling $y_b$ is calculated from the DR bottom-quark mass $m_b$ and the down-type VEV at the scale $M_Z$ as

$y_b(M_Z) = \frac{\sqrt{2}\, m_b(M_Z)}{v_d(M_Z)},$

with the DR bottom mass in the MSSM calculated as in Refs. [53,54]. Note that the matching of the SM to the MSSM leads to large logarithmic contributions in the MSSM DR parameters in the case of a heavy SUSY particle spectrum. These contributions can be resummed in a so-called EFT approach [31,33,46,55,56].

Calculation of the CP-even Higgs pole masses

FlexibleSUSY calculates the two CP-even Higgs pole masses $M_h$ and $M_H$ by diagonalizing the loop-corrected mass matrix

$M(p^2) = M^{\rm tree} + M^{\rm 1L}(p^2) + M^{\rm 2L} + M^{\rm 3L} \qquad (29)$

at the momenta $p^2 = M_h^2$ and $p^2 = M_H^2$, respectively ($M^{\rm 2L}$ and $M^{\rm 3L}$ are evaluated at $p^2 = 0$). The one-loop correction $M^{\rm 1L}(p^2)$ contains the full one-loop MSSM Higgs self energy and tadpole contributions, including electroweak corrections and the momentum dependence. The two-loop correction $M^{\rm 2L}$ contains the known corrections of order $\mathcal{O}\big((\alpha_t+\alpha_b)\alpha_s + (\alpha_t+\alpha_b)^2 + \alpha_\tau^2\big)$ [12][13][14][15][16]. The three-loop correction $M^{\rm 3L}$ incorporates the terms of order $\mathcal{O}(\alpha_t\alpha_s^2 + \alpha_b\alpha_s^2)$ from the Himalaya package, as described in Section 2. In Eq. (29) all contributions are defined in the DR scheme by default. The renormalization scale is chosen to be $Q = \sqrt{m_{\tilde{t},1} m_{\tilde{t},2}}$, and the DR parameters which enter Eq. (29) are evolved to that scale by using the three-loop RGEs of the MSSM [57,58]. Since the two CP-even Higgs pole masses are the output of the diagonalization of $M$ but at the same time must be inserted into $M^{\rm 1L}(p^2)$, an iteration over the momentum is performed for each mass eigenvalue until a fixed point for the Higgs masses is reached with sufficient precision (a minimal sketch of this iteration is given below).

Size of three-loop contributions from different sources

In the DR calculation within FlexibleSUSY+Himalaya, there are three sources of contributions which affect the Higgs pole mass at order $\mathcal{O}(\alpha_t\alpha_s^2 + \alpha_b\alpha_s^2)$: the one-loop threshold correction $\mathcal{O}(\alpha_s)$ to the strong coupling constant, the two-loop threshold correction $\mathcal{O}(\alpha_s^2)$ to the top and bottom Yukawa couplings, and the genuine three-loop contribution to the Higgs mass matrix.
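The momentum iteration mentioned above amounts to a simple fixed-point loop. A minimal sketch for the light eigenvalue, with the loop-corrected matrix of Eq. (29) supplied as a callback (names and tolerances are illustrative):

```cpp
#include <array>
#include <cmath>
#include <functional>

using Matrix2 = std::array<std::array<double, 2>, 2>;

// Smaller eigenvalue of a symmetric 2x2 matrix via trace and determinant.
double lightest_eigenvalue(const Matrix2& M) {
    const double tr  = M[0][0] + M[1][1];
    const double det = M[0][0] * M[1][1] - M[0][1] * M[1][0];
    return 0.5 * (tr - std::sqrt(tr * tr - 4.0 * det));
}

// Iterate p^2 -> M_h^2(p^2) until a fixed point is reached, cf. Eq. (29).
double higgs_pole_mass(const std::function<Matrix2(double)>& mass_matrix,
                       double start, double tol = 1e-5) {
    double p2 = start * start; // initial guess for M_h^2
    for (int it = 0; it < 100; ++it) {
        const double m2 = lightest_eigenvalue(mass_matrix(p2));
        if (std::fabs(m2 - p2) < tol * p2) return std::sqrt(m2);
        p2 = m2;
    }
    return std::sqrt(p2); // fallback if not converged
}
```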
In Figure 2, the impact of these three sources on the Higgs pole mass is shown relative to the two-loop calculation without these three corrections. The left panel shows the impact as a function of the SUSY scale $M_S$, and the right panel as a function of the relative stop mixing parameter $X_t/M_S$ for the scenario defined in Section 2.2. First, we observe that the inclusion of the one-loop threshold correction to $\alpha_s$, Eq. (13) (blue dashed line), leads to a significant positive shift of the Higgs pole mass of around +2.5 GeV for $M_S \approx 1$ TeV. For larger SUSY scales the shift increases logarithmically, as is to be expected from the logarithmic terms on the r.h.s. of Eq. (13). The inclusion of the full two-loop SQCD corrections to $y_t$ (green dash-dotted line) leads to a shift of similar magnitude, but in the opposite direction (the effect due to $y_b$ is negligible). Thus, there is a significant cancellation between the three-loop contributions from the one-loop threshold correction to $\alpha_s$ and those from the two-loop corrections to the top Yukawa coupling. Note that the nominal two-loop result of the original FlexibleSUSY (i.e., without Himalaya) includes by default the one-loop threshold correction to $\alpha_s$ and the SM QCD two-loop contributions to the top Yukawa coupling [32,33]. This means that the two-loop Higgs mass as evaluated by the original FlexibleSUSY already incorporates partial three-loop contributions. As a result, the two-loop result of the original FlexibleSUSY does not correspond to the zero-line in Figure 2, but is rather close to the blue dashed line. This implies that, compared to the two-loop result of the original FlexibleSUSY, the effect of the remaining $\alpha_t\alpha_s^2$ contributions in the Higgs mass prediction is negative.

Scale dependence of the three-loop Higgs pole mass

To estimate the size of the missing higher-order corrections, Figure 3 shows the renormalization scale dependence of the one-, two- and three-loop Higgs pole mass for the scenario defined in Section 2.2 with $\tan\beta = 5$ and $X_t = 0$.

[Figure 3: Variation of the Higgs pole mass when the renormalization scale at which the Higgs pole mass is calculated is varied by a factor of two, for $\tan\beta = 5$ and $X_t = 0$.]

The one- and two-loop calculations correspond to the original FlexibleSUSY. In the one-loop calculation the threshold corrections to $\alpha_s$ and $y_t$ are set to zero, and in the two-loop calculation the one-loop threshold corrections to $\alpha_s$ and the two-loop QCD corrections to $y_t$ are taken into account. The three-loop result of FlexibleSUSY+Himalaya includes all three-loop contributions at $(\alpha_t+\alpha_b)\alpha_s^2$ discussed above, i.e. the one-loop threshold correction to $\alpha_s$, the full two-loop SQCD corrections to $y_{t,b}$, and the genuine three-loop correction to the Higgs pole mass from Himalaya. In addition, the Higgs mass predicted at the two-loop level in the pure EFT calculation of HSSUSY is shown as the black dotted line, see Section 4.3. The bands show the corresponding variation of the Higgs pole mass when the renormalization scale is varied, using the three-loop renormalization group equations [57][58][59][60][61][62][63] for all parameters except for the vacuum expectation values, where the β-functions are known only up to the two-loop level [64,65]. In FlexibleSUSY and FlexibleSUSY+Himalaya, the renormalization scale is varied in the full MSSM within the interval $[M_S/2, 2M_S]$, while in HSSUSY it is varied in the Standard Model within the interval $[M_t/2, 2M_t]$, keeping the matching scale fixed at $M_S$.
The plot shows that the successive inclusion of higher-order corrections reduces the scale dependence, as expected. In particular, the three-loop corrections to the Higgs mass reduce the scale dependence by around a factor of two, compared to the two-loop calculation. The scale dependence of HSSUSY is almost independent of $M_S$, because the scale variation is done within the SM after integrating out all SUSY particles at $M_S$. Note that the variation of the renormalization scale only serves as an indicator of the theoretical uncertainty due to missing higher order effects.

Comparison with lower order and EFT results

In Figures 4-5 we compare our results with lower order and EFT calculations, using $\alpha_s(M_Z) = 0.1184$ and $G_F = 1.1663787 \cdot 10^{-5}\ {\rm GeV}^{-2}$ as input. All DR soft-breaking mass parameters as well as the $\mu$ parameter of the super-potential in the MSSM, and the running CP-odd Higgs mass, are set equal to $M_S$. The running trilinear couplings, except for $A_t$, are chosen such that there is no sfermion mixing. The stop mixing parameter $X_t = A_t - \mu/\tan\beta$ is defined in the DR scheme and left as a free parameter. The lightest CP-even Higgs pole mass is calculated at the scale $Q = \sqrt{m_{\tilde{t},1} m_{\tilde{t},2}}$. The EFT code HSSUSY calculates the quartic Higgs coupling of the Standard Model at $\mathcal{O}(\alpha_t(\alpha_t + \alpha_s))$ when integrating out the SUSY particles at a common SUSY scale [46,55]. Renormalization group running is performed down to the top mass scale using the three-loop RGEs of the Standard Model [59][60][61][62][63], and finally the Higgs mass is calculated at the two-loop level in the Standard Model at order $\mathcal{O}(\alpha_t(\alpha_t + \alpha_s))$. In terms of the implemented corrections, HSSUSY is equivalent to SusyHD [46], and resums large logarithms up to NNLL level while neglecting terms of order $v^2/M_S^2$. The $\mathcal{O}(v^2/M_S^2)$ corrections calculated in Ref. [66] have not been taken into account here. Consider first Figure 4. The left panel shows the Higgs mass prediction as a function of $M_S$ according to the three codes discussed above, together with the FlexibleSUSY+Himalaya result (solid red). The stop mixing parameter $X_t$ is set to zero. The right panel shows the difference of these curves to the latter. Note that the resummed result of HSSUSY neglects terms of order $v^2/M_S^2$, and thus forfeits reliability towards lower values of $M_S$. The deviation from the fixed-order curves below $M_S \approx 400$ GeV clearly underlines this. In contrast, the fixed-order results start to suffer from large logarithmic contributions towards large $M_S$, which on the other hand are properly resummed in the HSSUSY approach. From Figure 4, we conclude that the fixed-order DR result loses its applicability once $M_S$ is larger than a few TeV, while the deviation between the non-resummed on-shell result of FeynHiggs and HSSUSY increases more rapidly above $M_S \approx 1$ TeV. Note that the good agreement of FlexibleSUSY with HSSUSY above the few-TeV region is accidental, as shown in Ref. [33]. The effect of the three-loop $\alpha_t\alpha_s^2$ terms on the fixed-order result is negative, as discussed in Section 4.1, and amounts to a few hundred MeV in the region where the fixed-order approach is appropriate. They significantly improve the agreement between the fixed-order and the resummed prediction for $M_h$ in the intermediate region of $M_S$, where both approaches are expected to be reliable. Between $M_S$ of about 500 GeV and 5 TeV, our three-loop curve from FlexibleSUSY+Himalaya deviates from the HSSUSY result by less than 300 MeV. This corroborates the compatibility of the two approaches in the intermediate region.
Considering the current estimate of the theoretical uncertainty in the Higgs mass prediction [28,33,46,55,67], our observation even legitimates a naive switching between the fixed-order and the resummed approach at $M_S \approx 1$ TeV, instead of a more sophisticated matching procedure along the lines of Refs. [31,56]. Nevertheless, the latter is clearly desirable through order $\alpha_t\alpha_s^2$, in particular in the light of the observations for non-zero stop mixing to be discussed below, but has to be deferred to future work at this point. Figure 5 shows the three-loop effects as a function of $X_t$, where the value of $M_S = 2$ TeV is chosen to be inside the intermediate region. The figure shows that, for $|X_t| \lesssim 3 M_S$, the qualitative features of the discussion above are largely independent of the mixing parameter, although the quantitative differences between the fixed-order and the resummed results are typically larger for non-zero stop mixing. Figure 6 underlines this by setting $X_t = -\sqrt{6}\, M_S$ and varying $M_S$. The kink in the three-loop curve originates from a change of the optimal hierarchy chosen by Himalaya. The red band shows the uncertainty $\delta_i$ as defined in Eq. (3), which is used to select the best fitting hierarchy. We find that $\delta_i$ is comparable to the size of the kink, which indicates a reliable treatment of the hierarchy selection criterion.

Comparison with other three-loop results

The three-loop $\mathcal{O}(\alpha_t\alpha_s^2)$ corrections to the light MSSM Higgs mass discussed in this paper were originally implemented in the Mathematica code H3m. We checked that the implementation of the $\alpha_t$ and $\alpha_t\alpha_s$ terms in Himalaya leads to the same numerical results as in H3m, if the same set of DR parameters is used as input. Since the $\alpha_t\alpha_s^2$ terms of Himalaya are derived from their implementation in H3m, it is not surprising that they also result in the same numerical value if the same set of input parameters is given and the same mass hierarchy is selected. But since Himalaya has a slightly more sophisticated way of choosing this hierarchy (see Section 2.1), its numerical $\alpha_t\alpha_s^2$ contribution does occasionally differ slightly from the one of H3m. In Figure 7 we compare our results to the three-loop calculation presented in Ref. [68], assuming the input parameters for the "heavy sfermions" scenario defined in detail in the example folder of Ref. [69]. In the left panel the blue circles show the H3m result, including only the terms of $\mathcal{O}(\alpha_t + \alpha_t\alpha_s + \alpha_t\alpha_s^2)$, where the MSSM DR top mass is calculated using the "running and decoupling" procedure described in Ref. [68]. The black crosses show the same result, except that the DR top mass at the SUSY scale is taken from the spectrum generator FlexibleSUSY+Himalaya. We can reproduce the latter result with FlexibleSUSY+Himalaya if we take the same terms into account, i.e., $\mathcal{O}(\alpha_t + \alpha_t\alpha_s + \alpha_t\alpha_s^2)$; see the dotted red line in Figure 7. The small differences between the two results are due to the fact that H3m works with on-shell electroweak parameters, while FlexibleSUSY+Himalaya uses DR parameters. The inclusion of all one-loop contributions to $M_h$ and the momentum iteration reduces the Higgs mass by 4-6 GeV, as shown by the red dashed line. Including all two- and three-loop corrections which are available in FlexibleSUSY+Himalaya, i.e., $\mathcal{O}\big((\alpha_t+\alpha_b)\alpha_s + (\alpha_t+\alpha_b)^2 + \alpha_\tau^2 + (\alpha_t+\alpha_b)\alpha_s^2\big)$, further reduces the Higgs mass by up to 2 GeV, as shown by the red solid line.
The right panel of Figure 7 shows again our one-, two-, and three-loop predictions obtained with FlexibleSUSY, FlexibleSUSY+Himalaya, as well as the EFT result of HSSUSY. Similar to Figure 4, we observe that the higher-order terms lower the predicted Higgs mass and bring it closer to the resummed result. A detailed comparison of FlexibleSUSY+Himalaya to a result where H3m is combined with the lower-order results of FeynHiggs is beyond the scope of this paper and left to a future publication.

[Figure 7: Comparison of the lightest Higgs pole mass calculated at the one-, two- and three-loop level with FlexibleSUSY, FlexibleSUSY+Himalaya, H3m and HSSUSY as a function of the SUSY scale for the "heavy sfermions" scenario of Ref. [68]. The horizontal orange band shows the measured Higgs mass $M_h = (125.09 \pm 0.32)$ GeV including its experimental uncertainty.]

In contrast to Fig. 1 of Ref. [70], we observe a reduction of $M_h$ towards higher loop orders, thus leading to the opposite conclusion of a heavy SUSY spectrum in this scenario, given the current experimental value for the Higgs mass. Reassuringly, the higher order corrections move the fixed-order result closer to the resummed result, leading to agreement between the two at the level of about 1 GeV even at comparatively large SUSY scales.

Conclusions

We have presented the implementation Himalaya of the three-loop $\mathcal{O}(\alpha_t\alpha_s^2 + \alpha_b\alpha_s^2)$ terms of Refs. [1,20] for the light CP-even Higgs mass in the MSSM, and its combination with the DR spectrum generator framework FlexibleSUSY. These three-loop contributions have been available in the public program H3m before, where they were combined with the on-shell calculation of FeynHiggs. With the implementation into FlexibleSUSY presented here, we were able to study the size of the three-loop contributions within a pure DR environment. Despite the fact that the genuine $\mathcal{O}(\alpha_t\alpha_s^2)$ corrections are positive [1], the combination with the two-loop decoupling terms in the top Yukawa coupling leads to an overall reduction of the Higgs mass prediction relative to the "original" two-loop FlexibleSUSY result by about 2 GeV, depending on the value of the stop masses and the stop mixing. This moves the fixed-order prediction for the Higgs mass significantly closer to the result obtained from a pure EFT calculation in the region where both approaches are expected to give sensible results. Contributions of order $\mathcal{O}(\alpha_b\alpha_s^2)$ are found to be negligible in all scenarios studied here. To indicate the remaining theory uncertainty due to higher order effects, we have varied the renormalization scale which enters the calculation by a factor of two. The results show that the inclusion of the three-loop contributions reduces the scale uncertainty of the Higgs mass by around a factor of two, compared to a calculation without the genuine three-loop effects. We conclude that our implementation leads to an improved CP-even Higgs mass prediction relative to the two-loop results. Our implementation of the three-loop terms should be useful also for other groups that aim at a high-precision determination of the Higgs mass in SUSY models.

Acknowledgments

We would like to thank Luminita Mihaila, Matthias Steinhauser, and Nikolai Zerf for helpful comments on the manuscript, and valuable help in the comparison with H3m. Further thanks go to Pietro Slavich for his valuable comments, in particular for pointing out an inconsistency in Section 3.1 of the original manuscript.
Alexander Bednyakov kindly provided the general two-loop SQCD corrections to the running top and bottom masses in the MSSM in Mathematica format. RVH would like to thank the theory group at NIKHEF, where part of this work was done, for their kind hospitality. AV would like to thank the Institute for Theoretical Physics (ITP) in Heidelberg for its warm hospitality. Financial support for this work was provided by DFG.

A Installation of Himalaya

Himalaya can be downloaded as a compressed package from [47]. After the package has been extracted, Himalaya can be configured and compiled by running

    cd $HIMALAY_PATH
    mkdir build
    cd build
    cmake ..
    make

where $HIMALAY_PATH is the path to the Himalaya directory. When the compilation has finished, the build directory will contain the Himalaya library libHimalaya.a. For convenience, a library named libDSZ.a is created in addition, which contains the two-loop $\mathcal{O}(\alpha_t\alpha_s)$ corrections from Ref. [12].

B Installation of FlexibleSUSY with Himalaya

We provide a dedicated version of FlexibleSUSY 1.7.4, which uses Himalaya to calculate the Higgs pole mass at the three-loop level. This package contains three pre-generated MSSM models:

• MSSMNoFVHimalaya: This model represents the MSSM without (s)fermion flavour violation, where $\tan\beta$ is fixed at the scale $M_Z$ and the other SUSY parameters are fixed at a user-defined input scale. The parameters $\mu$ and $B\mu$ are fixed by the electroweak symmetry breaking conditions. The SUSY mass spectrum, including the Higgs pole masses, is calculated at the scale $Q = \sqrt{m_{\tilde{t},1} m_{\tilde{t},2}}$, where $m_{\tilde{t},i}$ are the two DR stop masses.

• MSSMNoFVatMGUTHimalaya: This is the same model as MSSMNoFVHimalaya, except that the input scale is the GUT scale $M_X$, defined to be the scale where $g_1(M_X) = g_2(M_X)$.

After a run, the file LesHouches.out.MSSMNoFVHimalaya will contain the SUSY particle spectrum in SLHA format. Alternatively, the Mathematica interface of FlexibleSUSY can be used:

    math -run "<< \"models/MSSMNoFVHimalaya/run_MSSMNoFVHimalaya.m\""

For each model an example SLHA input file and an example Mathematica script can be found in models/<model>/, where <model> is the used FlexibleSUSY model from above, i.e. either MSSMNoFVHimalaya, MSSMNoFVatMGUTHimalaya or NUHMSSMNoFVHimalaya.

One-loop threshold correction to $\alpha_s$. The strong coupling $\alpha_s^{\rm MSSM}(M_Z)$ is determined as described in Section 3.1. To achieve that in FlexibleSUSY, the global threshold correction loop order (EXTPAR [7]) must be set to 1 (or higher) and the specific threshold correction loop order for $\alpha_s$ (3rd digit from the right in EXTPAR [24]) must be set to 1 (or higher) in the SLHA input file. See the next paragraph for an example.

Two-loop threshold corrections to the Yukawa couplings. The two-loop SQCD corrections of Refs. [51][52][53][54] have been implemented into FlexibleSUSY. They must be activated by setting the global threshold correction loop order (EXTPAR [7]) to 2 and by setting the threshold correction loop order for $y_t$ and $y_b$ (7th and 8th digit from the right in EXTPAR [24]) to 2 in the SLHA input file.

C Configuration options to calculate the Higgs mass at three-loop level with FlexibleSUSY

Three-loop corrections to the CP-even Higgs mass. To use the three-loop corrections of order $\mathcal{O}(\alpha_t\alpha_s^2 + \alpha_b\alpha_s^2)$ to the light CP-even Higgs mass in the MSSM from Refs. [1,20], the pole mass and EWSB loop orders must be set to 3 in the SLHA input file. In addition, the individual three-loop corrections should be switched on by setting the flags 26 and 27 to 1. The user can select between the DR and MDR scheme for the three-loop corrections by setting the flag 25 to 0 or 1, respectively.

Three-loop renormalization group equations.
Optionally, the known three-loop renormalization group equations can be used to evolve the MSSM DR parameters from $M_Z$ to $M_S$ [57,58]. To activate the three-loop RGEs, the β function loop order must be set to 3 in the SLHA input file; the corresponding setting can be made analogously at the Mathematica level.

D Himalaya interface

Input parameters. To calculate the three-loop corrections to the light CP-even Higgs pole mass at order $\mathcal{O}(\alpha_t\alpha_s^2 + \alpha_b\alpha_s^2)$ with Himalaya, a set of DR parameters is needed, which is passed via the struct Parameters. The integer mdrFlag is optional and can be used to switch between the DR (0) and the MDR scheme (1); the DR scheme is chosen as default. The returned HierarchyObject holds all information of the hierarchy selection process, such as the best fitting hierarchy, or the relative error $\delta_{i_0}^{2L}/M_h^{\rm DSZ}$, where $\delta_i^{2L}$ is defined in Eq. (1) and $i_0$ denotes the "optimal" hierarchy as determined by the procedure of Section 2.1. The latter represents a lower limit on the expected accuracy of the expansion, by comparison to the exact two-loop result $M_h^{\rm DSZ}$. In addition to that, the HierarchyObject offers a set of member functions which provide access to all intermediate results. These functions are summarized in Table 1. The selection method described in Section 2 is also applied to the (s)bottom contributions.

Table 1 (excerpt):
• getExpUncertainty(int loops): Returns the uncertainty of the expansion at the given loop order (cf. Section 2.1).
• getDMh(int loops): Returns the Higgs mass matrix proportional to $\alpha_t$ or $\alpha_b$ at the given loop order. Note that at the two-loop level only corrections of order $\mathcal{O}(\alpha_t\alpha_s)$ are considered.

The function calculateDMh takes as arguments a HierarchyObject, the Higgs mass matrix massMatrix up to the loop order of interest, and three flags (oneLoopFlag, twoLoopFlag, threeLoopFlag) to define the desired loop orders. Using the member function calculateDMh, the returned HierarchyObject provides the user with the quantity $\delta_{\rm conv}$.
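Independently of the exact call signatures, consuming Himalaya's output in another DR code amounts to adding the returned three-loop matrix to the lower-order mass matrix before diagonalization, cf. Eqs. (4) and (29). A minimal, self-contained sketch with placeholder numbers (our illustration, not Himalaya code):

```cpp
#include <array>

using Matrix2 = std::array<std::array<double, 2>, 2>;

// Add a three-loop correction, as returned e.g. by getDMh(3), to a
// DR-bar CP-even mass matrix computed through two loops; cf. Eq. (29).
// All entries would be in GeV^2.
Matrix2 assemble(const Matrix2& M_up_to_2L, const Matrix2& dMh_3L) {
    Matrix2 M{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            M[i][j] = M_up_to_2L[i][j] + dMh_3L[i][j];
    // The result is then diagonalized with the momentum iteration
    // sketched in Section 3.2.
    return M;
}
```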
2017-11-29T20:44:18.000Z
2017-08-18T00:00:00.000
{ "year": 2017, "sha1": "3089aff0f21a70c41c1c8c8c4a5d86f4aeec7ac5", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-017-5368-6.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "3089aff0f21a70c41c1c8c8c4a5d86f4aeec7ac5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
118742329
pes2o/s2orc
v3-fos-license
Phase Estimation of Coherent States with a Noiseless Linear Amplifier

Amplification of quantum states is inevitably accompanied by the introduction of noise at the output. For protocols that are probabilistic with heralded success, noiseless linear amplification may in theory still be possible. When the protocol is successful, it can lead to an output that is a noiselessly amplified copy of the input. When the protocol is unsuccessful, the output state is degraded and is usually discarded. Probabilistic protocols may improve the performance of some quantum information protocols, but not for metrology if the whole statistics is taken into consideration. We calculate the precision limits on estimating the phase of coherent states using a noiseless linear amplifier by computing its quantum Fisher information, and we show that on average, the noiseless linear amplifier does not improve the phase estimate. We also discuss the case where abstention from measurement can reduce the cost for estimation.

I. INTRODUCTION

Quantum metrology is concerned with the measurement of a weak signal with the best achievable precision by using a quantum probe. One important example is the detection of gravitational waves by measuring the phase difference of light. It would be beneficial if we could somehow amplify the signal prior to measurement. If the signal is encoded in the amplitude $\alpha$ of a coherent state $|\alpha\rangle$, a noiseless linear amplifier (NLA) can do just that. An NLA with an amplification gain $g > 1$ transforms the coherent state $|\alpha\rangle$ to $|g\alpha\rangle$ [1], thereby amplifying the signal but not the noise. If this transformation could be performed deterministically, we would obtain a more precise estimate of the signal. Unfortunately, it is not possible to noiselessly amplify a quantum state deterministically [2]. But an approximate version of the NLA which works probabilistically is possible and has been realised by several experimental groups [3][4][5][6][7]. We investigate the precision of phase estimation of coherent states using a probabilistic NLA, as shown schematically in Fig. 1. When the NLA successfully amplifies a coherent state, we are able to estimate the phase more precisely. However, when the amplification fails, we obtain a worse estimate of the phase than if we had not used the NLA. We show that on average, postselecting the successfully amplified events or using both successful and unsuccessful events of the NLA does not improve the precision of phase estimation.
This is consistent with known results that, by post-selecting based on the measurement outcomes, probabilistic metrology can result in improved quantum state estimation of the post-selected sub-ensemble [8][9][10], but on average postselection cannot increase information [11][12][13][14]. However, with a different figure of merit, post-selection can help. This is the case for state discrimination when a cost is assigned to wrong guesses and for abstaining [15]. For our case, by assigning a cost for performing an estimator measurement, then by postselecting the successful outcome of the NLA and only performing the estimator measurement on these, we can achieve a desired precision at a lower cost.

II. PHASE ESTIMATION

To quantify the precision of an estimate, we shall use the quantum Fisher information [16][17][18][19]. Given a sample of $m$ identical and independent states $\rho_\theta$ that depend on some unknown parameter $\theta$ that we wish to estimate, the quantum Cramér-Rao (QCR) bound states that the variance of an unbiased estimator $\hat\theta$ is bounded by

$\Delta^2\hat\theta \geq \frac{1}{m\, J(\rho_\theta)}, \qquad (1)$

where $J(\rho_\theta)$ is the quantum Fisher information

$J(\rho_\theta) = {\rm Tr}\big(\rho_\theta L^2\big). \qquad (2)$

The symmetric logarithmic derivative $L$ is some Hermitian operator defined implicitly through

$\dot\rho_\theta = \tfrac{1}{2}\big(L\rho_\theta + \rho_\theta L\big), \qquad (3)$

where an overdot is used to indicate a derivative with respect to $\theta$. The QCR bound is asymptotically attainable when $m \gg 1$ [20]. A large Fisher information allows for a more precise estimate of the parameter $\theta$. Equivalently, a larger Fisher information allows a parameter $\theta$ to be estimated to the same precision from a smaller sample. For a pure state, $\rho_\theta = |\psi_\theta\rangle\langle\psi_\theta|$, we have $\dot\rho_\theta = \rho_\theta\dot\rho_\theta + \dot\rho_\theta\rho_\theta$, which indicates that we can take $L = 2\dot\rho_\theta$. This gives

$J(\rho_\theta) = 4\big(\langle\dot\psi_\theta|\dot\psi_\theta\rangle - |\langle\psi_\theta|\dot\psi_\theta\rangle|^2\big). \qquad (4)$

We apply the above formalism to an NLA. The NLA we consider is implemented through a two-outcome measurement device characterised by a gain $g > 1$ and a maximum amplified photon number $n_0 \in \mathbb{Z}^+$ [22,23]. $n_0$ determines how closely the successfully amplified output from this device resembles the output of an ideal NLA. A larger $n_0$ gives a more faithful approximation at the expense of a lower probability of success. The first measurement outcome corresponds to the operator

$E_s = \sum_{n=0}^{n_0} g^{(n-n_0)/2}\,|n\rangle\langle n| + \sum_{n=n_0+1}^{\infty} |n\rangle\langle n|,$

which heralds a successful amplification event and projects the input state $\rho_\theta$ to the state $\rho_{s,\theta} = E_s\rho_\theta E_s/{\rm Tr}(\rho_\theta E_s^2)$. The successful amplification event occurs with probability $p_s = {\rm Tr}(\rho_\theta E_s^2)$. The second measurement outcome $E_f = \sqrt{1 - E_s^2}$ corresponds to a failed amplification event, which projects the input state to $\rho_{f,\theta} = E_f\rho_\theta E_f/{\rm Tr}(\rho_\theta E_f^2)$ and occurs with probability $p_f = {\rm Tr}(\rho_\theta E_f^2)$. We assume that $p_s$ and $p_f$ do not depend on $\theta$, which is true for the state that we shall consider later. From the states $\rho_{s,\theta}$ we can construct $\hat\theta_s$, an estimator of $\theta$, while from the states $\rho_{f,\theta}$ we construct a second estimator $\hat\theta_f$. Combining these two independent estimators, we arrive at a third estimator given by

$\hat\theta_{\rm NLA} = \beta\,\hat\theta_s + (1-\beta)\,\hat\theta_f, \qquad (5)$

where $V_s$ and $V_f$ denote the variances of $\hat\theta_s$ and $\hat\theta_f$, and the weight $\beta = V_f/(V_s + V_f)$ is chosen to minimise the variance of $\hat\theta_{\rm NLA}$. The variances $V_s$ and $V_f$ depend on the numbers of successful and failed amplification events, denoted by $n_s$ and $n_f$ respectively. Hence the weight $\beta$ is also a function of the number of successfully amplified events $n_s$. The variance of the estimator $\hat\theta_{\rm NLA}$ given $n_s$ is

$V_{\rm NLA} = \frac{1}{n_s J_s + n_f J_f},$

using the notation $J_s = J(\rho_{s,\theta})$ and $J_f = J(\rho_{f,\theta})$, and since $n_s$ does not depend on $\theta$. For large $m$, $n_s/m \to p_s$ and $n_f/m \to p_f$, so that $J_{\rm NLA} \to p_s J_s + p_f J_f$ [14]. We consider a coherent input state $\rho_\alpha = |\alpha\rangle\langle\alpha|$ with $\alpha = re^{i\theta}$, where the amplitude $r$ is known and whose phase $\theta$ we wish to estimate.
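Before specializing to the NLA outputs, it is instructive to evaluate Eq. (4) directly for this coherent state:

$|\psi_\theta\rangle = |re^{i\theta}\rangle = e^{-r^2/2}\sum_{n=0}^{\infty}\frac{(re^{i\theta})^n}{\sqrt{n!}}\,|n\rangle \quad\Rightarrow\quad |\dot\psi_\theta\rangle = i\,\hat{n}\,|\psi_\theta\rangle,$

so that $\langle\dot\psi_\theta|\dot\psi_\theta\rangle = \langle\hat{n}^2\rangle$ and $\langle\psi_\theta|\dot\psi_\theta\rangle = i\langle\hat{n}\rangle$, and Eq. (4) gives

$J = 4\big(\langle\hat{n}^2\rangle - \langle\hat{n}\rangle^2\big) = 4\,{\rm Var}(\hat{n}) = 4r^2,$

where the last step uses the Poissonian photon statistics of a coherent state.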
The quantum Fisher information for ρ_α is J_α = 4r² [24][25][26]. Applying the NLA to the state |α⟩, we get one of the two outputs

$$|\psi_s\rangle = \frac{E_s|\alpha\rangle}{\sqrt{p_s}} \qquad \text{or} \qquad |\psi_f\rangle = \frac{E_f|\alpha\rangle}{\sqrt{p_f}},$$

with probabilities

$$p_s = e^{-r^2}\left[\sum_{n=0}^{n_0} \frac{g^{\,2(n-n_0)}\, r^{2n}}{n!} + \sum_{n=n_0+1}^{\infty} \frac{r^{2n}}{n!}\right]$$

and p_f = 1 − p_s, which do not depend on θ. The probabilities of success and failure are plotted in Fig. 2 for r = 0.25. As n_0 increases, we get a better approximation to the ideal NLA transformation, but at the expense of a lower probability of success. Differentiating the outputs with respect to θ, we get the unnormalised derivative states with which we can compute J_s and J_f. We plot the Fisher information J_α, J_s and J_f as a function of the NLA gain in Fig. 3.

[Fig. 3 caption: J_s (cases of successful amplification) is higher than the Fisher information without the NLA, J_α (green line). J_f (red lines) is the Fisher information when the NLA fails to amplify the state; in these cases J_f is less than J_α. J_ideal (thick blue line) is the Fisher information of the state |gα⟩ that one would obtain from a successful NLA with a large n_0. The input state has amplitude r = 0.25 and the Fisher information is normalised such that J_α = 1.]

The successfully amplified states |ψ_s⟩ have a higher Fisher information than the input coherent states, while the failure states |ψ_f⟩ have a lower Fisher information. Hence, we can probabilistically obtain a higher information when the amplification succeeds. For n_0 = 1, the states |ψ_f⟩ carry no information about the phase θ. In Fig. 4, we plot the Fisher information scaled by the respective probabilities. We see that p_s J_s and p_f J_f are both lower than J_α. Their sum, J_NLA, is also always lower than the Fisher information without using an NLA. This demonstrates the fact that post-selection cannot increase information [11,13,14]. From Fig. 2, we see that when g increases, there is a much higher probability for the amplification to fail. For n_0 > 1, this results in more net information gained from the failed amplification events than from the successfully amplified events at high g. In Fig. 5, we fix the NLA gain at g = 2 and plot the Fisher information J_NLA as a function of the fraction of successfully amplified states n_s/m. We see that as n_s increases, J_NLA increases and eventually becomes larger than J_α. However, the probability of getting a large enough n_s is small when the sample size m is large. For example, for m = 1000, we need n_s > 89 before J_NLA > J_α; the probability of this is only 4.68%. The vertical line indicates the mean value n_s/m = p_s. At this value, J_NLA is less than J_α.

III. SIMULATIONS WITH FINITE SAMPLE

For small θ and a pure state ρ_θ, the QCR bound can be attained by measuring the observable C = λ² L, where

$$L = 2\left(|\dot\psi_0\rangle\langle\psi_0| + |\psi_0\rangle\langle\dot\psi_0|\right)$$

has rank at most two and λ² = 1/(4 Tr[ρ_0 ρ̇_0²]) = 1/J(ρ_0). The estimator obtained through C has moments

$$\mathrm{Tr}[\rho_\theta C] = \theta + O(\theta^2), \qquad \mathrm{Tr}[\rho_\theta C^2] = \frac{1}{J(\rho_0)} + O(\theta),$$

which verify that C is an unbiased estimator of θ achieving the QCR bound. The observable C has zero trace and spectral decomposition C = λ(|c_+⟩⟨c_+| − |c_−⟩⟨c_−|), where |c_+⟩ and |c_−⟩ are orthonormal vectors. Given m trials, the probability of obtaining n_+ positive outcomes and n_− negative outcomes follows a multinomial distribution

$$\Pr(n_+, n_-) = \frac{m!}{n_+!\; n_-!\; (m - n_+ - n_-)!}\; p_+^{\,n_+}\, p_-^{\,n_-}\, p_0^{\,m - n_+ - n_-},$$

where p_± = ⟨c_±|ρ_θ|c_±⟩ and p_0 = 1 − p_− − p_+. For coherent states without the NLA, λ_α = 1/(2r) and the corresponding observable C_α is an optimal unbiased estimator of θ. For m measurements, the counts n_{α+} and n_{α−} follow a multinomial distribution with m trials and event probabilities p_{α±} = ⟨c_{α±}|ρ_θ|c_{α±}⟩. C_α is the maximum likelihood estimator giving an estimate [27]

$$\hat\theta_\alpha = \frac{n_{\alpha+} - n_{\alpha-}}{n_{\alpha+} + n_{\alpha-}}\,\lambda_\alpha.$$
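The finite-sample procedure just described can be mimicked numerically. The sketch below is our own construction (variable names and values are illustrative): it builds the rank-two observable C = L/J for the coherent state at θ = 0, samples the three outcomes from the multinomial law above, and checks that θ̂_α is essentially unbiased with a mean square error close to the QCR value 1/(m J_α).

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)
DIM, r, m, trials = 40, 0.25, 1000, 2000
theta_true = 0.01                       # small phase, where C is unbiased

def coherent(r, theta, dim=DIM):
    n = np.arange(dim)
    mag = np.exp(-r**2 / 2) * r**n / np.exp(0.5 * gammaln(n + 1))
    return mag * np.exp(1j * n * theta)

# Rank-two observable C = L / J built from |psi_0> and its derivative
h = 1e-5
psi0 = coherent(r, 0.0)
dpsi = (coherent(r, h) - coherent(r, -h)) / (2 * h)
L = 2 * (np.outer(dpsi, psi0.conj()) + np.outer(psi0, dpsi.conj()))
J = 4 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi0, dpsi))**2)  # ~ 4 r^2
evals, evecs = np.linalg.eigh(L / J)
lam, c_plus, c_minus = evals[-1], evecs[:, -1], evecs[:, 0]       # +/- lambda
print("lambda =", lam, "  1/(2r) =", 1 / (2 * r))                 # agree

# Sample the outcomes (+, -, null) m times and form the estimator
psi = coherent(r, theta_true)
p_plus = abs(np.vdot(c_plus, psi))**2
p_minus = abs(np.vdot(c_minus, psi))**2
counts = rng.multinomial(m, [p_plus, p_minus, 1 - p_plus - p_minus],
                         size=trials)
est = lam * (counts[:, 0] - counts[:, 1]) / (counts[:, 0] + counts[:, 1])
print("mean:", est.mean(),
      " MSE:", ((est - theta_true)**2).mean(),
      " QCR 1/(m J):", 1 / (m * J))
```

The printed eigenvalue reproduces λ_α = 1/(2r), and the empirical MSE sits at the Cramér-Rao level, which is the behaviour the finite-sample simulations of this section are designed to probe.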
The estimate obtained from the NLA can be viewed as an estimate obtained from a five-outcome POVM with effects

$$E_s|c_{s+}\rangle\langle c_{s+}|E_s,\quad E_s|c_{s-}\rangle\langle c_{s-}|E_s,\quad E_f|c_{f+}\rangle\langle c_{f+}|E_f,\quad E_f|c_{f-}\rangle\langle c_{f-}|E_f,$$

together with a fifth element that completes the identity. The vectors |c_{s±}⟩ and |c_{f±}⟩ are the eigenvectors of the observables C_s and C_f, with corresponding eigenvalues ±λ_s and ±λ_f, for optimal estimation with the input states |ψ_s⟩ and |ψ_f⟩. Given m measurements, the counts n_{s±} and n_{f±} follow a multinomial distribution with m trials and event probabilities p_{s±} = Tr[ρ_α E_s|c_{s±}⟩⟨c_{s±}|E_s] and p_{f±} = Tr[ρ_α E_f|c_{f±}⟩⟨c_{f±}|E_f]. Given these counts, the maximum likelihood estimate for θ is constructed as [27]

$$\hat\theta_{\mathrm{NLA}} = \frac{n_s J_s\,\hat\theta_s + n_f J_f\,\hat\theta_f}{n_s J_s + n_f J_f},$$

which is consistent with the combined estimator defined earlier, where n_s = n_{s+} + n_{s−} and n_f = n_{f+} + n_{f−}. The intermediate estimators are θ̂_s = (n_{s+} − n_{s−})λ_s/n_s and θ̂_f = (n_{f+} − n_{f−})λ_f/n_f. We plot the precision of the estimators θ̂_α and θ̂_NLA, defined as the inverse of the mean square error (MSE), in Fig. 6, where the MSE of an estimator θ̂ is

$$\mathrm{MSE}(\hat\theta) = \left\langle (\hat\theta - \theta_{\mathrm{true}})^2 \right\rangle.$$

Here θ_true is the true value of the parameter θ. From Fig. 6, we see that on average the NLA does not increase the precision of phase estimation.

IV. DISCUSSION

The NLA is well suited to tasks where all that matters is the successfully amplified states and the probability of success does not matter, such as probabilistic entanglement distillation and quantum key distribution [7]. In a phase estimation problem, if the figure of merit is the precision obtained from a given number of samples, then, as is to be expected, using the NLA does not offer any advantage compared to the optimal phase estimation scheme. However, with different figures of merit, using an NLA and post-selecting only successfully amplified events can help in metrology. Suppose we associate a cost x with acquiring a sample, y with a direct measurement of an estimator observable on each sample, and z with applying a noiseless linear amplification to a sample, and our objective is to minimise the cost of obtaining an estimate of θ to a specified precision ϵ (the target variance). To achieve the specified precision without using the NLA, we would need to perform an estimate on m_α = 1/(ϵ J_α) samples. The total cost is then (x + y)/(ϵ J_α). With the NLA, and performing an estimate only when the NLA heralds a successful amplification event, we now need to perform an estimate on only m_s = 1/(ϵ J_s) samples. Since J_s > J_α, each measurement gives more information, and so we need fewer estimator measurements compared to estimating without the NLA. However, the total number of samples we need to acquire increases, because some samples are discarded when the NLA does not herald a successful amplification. We now need on average a total of m_s/p_s samples, and the total cost of the estimate is (x + z + p_s y)/(ϵ p_s J_s). In conventional metrology, the cost y assigned to measuring an estimator observable is zero, and since p_s J_s < J_α, the cost of the post-selection strategy will always be higher than without using the NLA. In this case, post-selection does not help. However, if y is non-zero, then the total cost of using the NLA and performing post-selection can be less than that of a direct measurement on all samples. This is true when

$$\frac{x + z + p_s\, y}{p_s J_s} < \frac{x + y}{J_\alpha}.$$

In this case, the better strategy is to abstain from measuring the sample whenever the NLA fails to amplify.
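As a toy illustration of this accounting, the following sketch compares the two strategies for a target variance ϵ. All numbers are placeholders of our choosing, not values from the paper; they are picked so that the measurement cost y dominates, which is the regime where post-selection pays off.

```python
def cost_direct(x, y, J_alpha, eps):
    """All samples measured directly; m_alpha = 1/(eps * J_alpha) samples."""
    return (x + y) / (eps * J_alpha)

def cost_postselected(x, y, z, J_s, p_s, eps):
    """Measure only NLA successes: m_s = 1/(eps*J_s) measured,
    m_s/p_s samples acquired and amplified on average."""
    m_s = 1.0 / (eps * J_s)
    return (x + z) * m_s / p_s + y * m_s

# Illustrative placeholder numbers: measurement far dearer than acquisition.
x, y, z = 1.0, 20.0, 0.5
J_alpha, J_s, p_s, eps = 0.25, 0.60, 0.30, 1e-3
print(cost_direct(x, y, J_alpha, eps))            # 84000.0
print(cost_postselected(x, y, z, J_s, p_s, eps))  # ~41667: cheaper here
```

Setting y = 0 in the same sketch reverses the ordering, since p_s J_s < J_α, reproducing the conventional-metrology conclusion that post-selection alone cannot help.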
2016-12-01T03:42:03.000Z
2016-08-05T00:00:00.000
{ "year": 2016, "sha1": "184ae4259c39d004647f3b1f8feded36f895446a", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1608.01777", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "184ae4259c39d004647f3b1f8feded36f895446a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
264140054
pes2o/s2orc
v3-fos-license
Anxiety, Depression, and Associated Factors among General Population in Indonesia during COVID-19 Pandemic: A Cross-Sectional Survey Introduction: The 2019 coronavirus pandemic (COVID-19) has affected the physical and mental health of individuals, families, and communities worldwide, including Indonesia. This study aimed to examine anxiety and depression in the general population and factors related to anxiety and depression due to the COVID-19 pandemic. Methods: This study employed an online cross-sectional survey of 1149 respondents. We assessed self-reports regarding current health conditions and exposure to COVID-19, anxiety, and depression in the general population in Indonesia. Results: The results showed that 26.6% and 30.5% of the participants experienced mild to severe anxiety and depression, respectively. The ordinal regression test showed that anxiety in the community was significantly related to age, feeling infected with COVID-19, feeling that a friend/colleague is infected with COVID-19, sufficient information regarding COVID-19, and the types of symptoms felt (fever, cough and cold/sore throat, difficulty breathing). In addition, education level, occupation, feeling that family is infected with COVID-19, symptoms experienced, and anxiety were significantly related to depression. Conclusion: The COVID-19 pandemic has caused anxiety and depression in the general population in Indonesia. This study's results can be a catalyst in providing psychological interventions for the general public facing the COVID-19 pandemic. Introduction A series of unexplained pneumonia cases has been reported in Wuhan, China, since December 2019. The COVID-19 epidemic that emerged in Wuhan, China, has now spread worldwide. From 2020 until early 2022, Materials and Methods This research was a cross-sectional study conducted in 2021 using an online survey distributed via the WhatsApp platform. This study involved the general population in East Java, Indonesia, with a total population of 40.16 million. We used a convenience sampling method with the following inclusion criteria: 1) at least 17 years old, 2) not a health worker, and 3) provided informed consent as a research respondent. Respondents were excluded if they did not complete the questionnaire. Data were collected from 1214 respondents; after data cleaning, 1149 respondents remained (5% of the data were excluded for incorrect or incomplete entries). Using the Slovin formula without a population size, with a proportion of 9.8% (the proportion of organic mental disorders based on the 2018 Indonesian National Health Research) and a confidence level of 95%, a minimum sample size of 136 people was obtained, so the data collected were sufficient for testing the hypothesis. The questionnaire consists of demographic data, self-reports related to health conditions and exposure to COVID-19, and anxiety and depression problems. Demographic data include age (years), gender (male or female), education (elementary school, primary school, high school), and occupation (civil servant, private employee, self-employed, homemaker, not working, or other).
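As a quick arithmetic check of the minimum sample size quoted above, here is a minimal sketch assuming the usual single-proportion formula n = z²·p·(1−p)/e²; the 5% margin of error e is our assumption, since the text states only the proportion (9.8%) and the 95% confidence level.

```python
import math

z = 1.96    # two-sided 95% confidence
p = 0.098   # stated proportion (organic mental disorders, national survey 2018)
e = 0.05    # margin of error; a 5% margin is our assumption

n = z**2 * p * (1 - p) / e**2
print(math.ceil(n))   # 136, matching the minimum sample size quoted above
```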
The self-report section covers questions about current health conditions and exposure to COVID-19. The following questions determine current health conditions and exposure to COVID-19: Do you think you/your family/friends in your work environment are infected with COVID-19? Are you well-informed about COVID-19? Where was the information obtained? Are your activities disrupted due to COVID-19? Do you have any symptoms of COVID-19? Do you have a travel history to the affected areas in the last 14 days? And have you had any possible contacts with COVID-19? The Generalized Anxiety Disorder scale (GAD-7) and the Patient Health Questionnaire (PHQ-9) assessed the respondents' anxiety and depression. GAD-7 and PHQ-9 have good validity and reliability. GAD-7 is a self-assessed evaluation of the severity of anxiety. The total score is categorized as follows: (0-4) minimal anxiety, (5-9) mild anxiety, (10-14) moderate anxiety, and (15-21) severe anxiety. 15 PHQ-9 is a self-assessed evaluation of the severity of depression. The total score is categorized as follows: (0-4) minimal depression, (5-9) mild depression, (10-14) moderate depression, (15-19) moderately severe depression, and (20-27) severe depression. 16 These categories are based on the scores assigned in the literature. The cutoff scores for detecting severe symptoms of anxiety and depression were 7 and 10, respectively. Respondents with scores greater than the threshold were characterized as having severe symptoms. All questionnaires in this study were administered in Indonesian. The GAD-7 questionnaire passed the validity and reliability tests in previous research, with a correlation coefficient of 0.64 to 0.80 and a Cronbach's alpha of 0.86. 17 The validity and reliability of the PHQ-9 questionnaire were indicated by a correlation coefficient of 0.52 and a Cronbach's alpha of 0.88, respectively. 18 Descriptive analysis was employed to summarize demographic characteristics, self-reported health conditions, anxiety, and depression. This analysis used frequency distributions for categorical variables, while the mean and standard deviation were used for numeric variables. Bivariate analysis using Spearman rank analysis and the contingency coefficient was used to examine correlations between variables. Correlations were considered significant if P < 0.05. The multivariate analysis employed ordinal regression to assess factors associated with anxiety and depression in the community during COVID-19. Data analysis was performed using a statistical program. This study received ethical approval from the Health Research Ethics Commission of the Faculty of Medicine, Universitas Brawijaya, Malang, Number 243/EC//KEPK/08/2021. The purposes, advantages, and disadvantages that might be experienced during the study were explained to respondents before participation. Respondents who participated in the study signed an informed consent form. Respondents' participation in this research was voluntary.
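The severity bands quoted above translate directly into a small scoring routine. The sketch below is our own illustration; the function names and the input format (lists of per-item scores from 0 to 3) are assumptions, not part of the study's materials.

```python
def categorize(total, bands):
    """Map a total score onto the first band whose upper limit it fits."""
    for upper, label in bands:
        if total <= upper:
            return label
    raise ValueError("score out of range")

GAD7_BANDS = [(4, "minimal"), (9, "mild"), (14, "moderate"), (21, "severe")]
PHQ9_BANDS = [(4, "minimal"), (9, "mild"), (14, "moderate"),
              (19, "moderately severe"), (27, "severe")]

def gad7(items):   # 7 items, each scored 0-3
    return categorize(sum(items), GAD7_BANDS)

def phq9(items):   # 9 items, each scored 0-3
    return categorize(sum(items), PHQ9_BANDS)

print(gad7([1, 2, 1, 0, 2, 1, 1]))        # total 8  -> "mild"
print(phq9([2, 2, 1, 1, 2, 1, 2, 1, 0]))  # total 12 -> "moderate"
```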
Results The findings of this study showed that, among the 1149 respondents, most were about 25 years old, and 917 respondents (79.8%) were women. Most respondents (52.8%) have intermediate (primary schooling) education, and 73.3% do not work. In the self-report section, 13.1% of respondents contended that they might be infected with COVID-19, and 1.5% stated that they were infected with COVID-19. As many as 7.2% of respondents stated that their family was likely infected, and 0.3% that their family was infected. Meanwhile, 19.7% of respondents believed that their coworkers were likely infected, and 3% that their coworkers were infected with the virus. Most of the respondents (94%) contended that they obtained enough information related to COVID-19, and the remaining 5.7% said that they did not get enough information; 85.5% of respondents sought information from the internet. Most of them (82.2%) stated that this pandemic disrupted their work. Some respondents experienced fever (2.2%), cough and cold or sore throat (12.1%), or difficulty breathing (0.8%), and 3.8% of respondents experienced two of the above symptoms. Most respondents (98.3%) did not travel to areas affected by COVID-19, and 93% did not have a history of contact with people exposed to COVID-19 (Table 1). The result of the bivariate analysis indicated a significant relationship of age and disturbed activities with anxiety during COVID-19, with P values of 0.036 and 0.021, respectively. Also, self-reports regarding feelings of being infected with COVID-19 in oneself, family, and friends in the work environment, and the appearance of physical symptoms of COVID-19 infection, were associated with anxiety symptoms (P value: 0.00). Regarding depression, age, work status, receiving information related to COVID-19, and feeling that one's activities were disturbed due to COVID-19 were significantly correlated with depression during COVID-19, with P values of 0.002, 0.001, 0.03, and 0.004, respectively. Similar to anxiety, self-reports related to feelings of being infected with COVID-19 in oneself, family, and friends in the work environment, and the appearance of physical symptoms of COVID-19 infection, were related to depression (P value: 0.00). In conclusion, there was a significant relationship between anxiety and depression due to COVID-19, as indicated by a P value of 0.00.
Based on the regression analysis in Table 2, the variables related to anxiety include: age (older respondents tend to experience lower anxiety than younger ones), feeling infected with COVID-19 (someone who knows they are infected experiences higher anxiety than those who do not know whether they have been infected), feeling that a friend/coworker is infected with COVID-19 (someone who knows that a friend/coworker has been infected experiences higher anxiety than those who do not), sufficient information regarding COVID-19 (those with insufficient information related to COVID-19 tend to experience higher anxiety than someone with adequate information), the origin of information associated with COVID-19 (someone who gets information related to COVID-19 from television and institutions experiences higher anxiety), and the type of symptoms felt (someone who has symptoms of fever/a history of fever, cough and cold/sore throat, or shortness of breath/difficulty breathing has higher anxiety than those without symptoms). Table 3 describes the coefficient of determination of the model. The Nagelkerke score showed a coefficient of determination of 0.129, which means that the independent variables explain 12.9% of the dependent variable, while 87.1% is influenced by other factors not included in the model. Among the independent variables, those related to depression included education level (respondents with lower levels of education tend to have higher depression than those with higher levels), employment status (someone who works tends to experience more severe depression than someone who does not), feeling that family is infected with COVID-19 (someone who knows their family has been infected experiences higher depression than those who do not), the type of symptoms experienced (someone who has fever symptoms/a history of fever, cough and cold/sore throat, or shortness of breath/difficulty breathing experiences higher depression than someone without symptoms), and anxiety (someone with a high level of anxiety tends to experience more severe depression) (Table 4). For depression, the Nagelkerke test indicated a coefficient of determination of 0.437, which means that the independent variables explain 43.7% of the dependent variable, while 56.3% is influenced by other factors not included in the model. Discussion The present study reports a significant relationship between age and anxiety during COVID-19. The general population at a younger age tends to experience anxiety. This study's results align with research conducted by Bolarinwa et al, 19 showing that among those aged under 40 years, more than 60% experienced stress during the COVID-19 pandemic. People experience anxiety because they are worried about being exposed to the virus during a pandemic. 20 Young people around 20 years of age feel worried and have trouble sleeping due to accessing social media information that cannot be verified. 21 In young people, stress, anxiety, and depression were the most common mental health issues during the pandemic. 22 Productive age and a lack of support from society, family, and medical personnel put people at risk of anxiety, depression, and sleep disorders during a pandemic. 23
This study's findings also indicate a significant relationship between anxiety and feeling infected with COVID-19, whether in oneself or in friends, and the appearance of physical symptoms. During COVID-19, about 50%, over 60%, and 80% of women worried about their own health, their children, and their relatives, respectively. 24 This means that someone who knows they are infected will feel more anxious. Someone who experiences symptoms or has been exposed to COVID-19 through friends at the office will feel more anxious. 25 A person becomes worried when possible contact with an infected coworker creates a risk of transmission to their children and family. 26 Anxiety related to infection and the impact of COVID-19 can cause behavioral changes and panic, causing emotional distress and social disorders. 27 Some tried to prevent this by choosing to live separately from their families, but their feelings of anxiety increased. 26 Living apart from loved ones for a long time causes psychological disorders and a risk of psychiatric problems. 28 Sufficient information regarding COVID-19 is significantly related to anxiety. Research by González-Sanguino et al 29 showed that receiving sufficient information regarding COVID-19 is a protective factor against anxiety symptoms. Someone who is informed and knowledgeable will have high awareness of infection control measures and becomes anxious when there is insufficient information, 21 even though exposure to excessive information from various unclear social media sources can increase stress and anxiety. 30 Anxiety occurs when there are infodemics of fake news, conspiracy theories, and drugs that can heal instantly. 31 Thus, it is imperative to direct the public to access official health websites belonging to national and international health agencies. 32 Regulations related to the circulation of such information are therefore needed, including WHO's cooperation with various social media platforms. 33 Most people get accurate information regarding COVID-19 and infection prevention strategies from television. 30 The public also seeks information through electronic media with internet connectivity while undergoing social restrictions during the pandemic. 31 Occupation also has a significant relationship with anxiety due to COVID-19. One's job is related to the economy; during a pandemic, economic change and increasing crime rates make it even more worrying. 20 Economic anxiety is also more common in young adults than in older adults. 34 Anxiety over the financial crisis occurs when social distancing, self-isolation, and travel restrictions result in a reduced labor force in all sectors of the economy and cause many people to lose their jobs. 35 Greater job insecurity is indirectly associated with greater anxiety symptoms due to more substantial financial problems. 36 The pandemic requires individuals to observe social distancing, travel bans, cancellations of sporting events, and changes in work practices that affect everyday life. 37 In terms of depression, this study showed a significant relationship between age and depression due to COVID-19. Young adults (< 35 years), women, and the unemployed feel more burdened, so they are more at risk of experiencing depression. 38 Young people with poor economic conditions who are required to stay at home during a pandemic are at increased risk of experiencing depression due to worries about the future. 39
Young people have less effective emotional regulation and cognitive abilities, more maladaptive coping, and less social support. 39 Psychosomatic symptoms such as insomnia, anxiety, feelings of loneliness, and depression are common. 40 Quarantine is an unpleasant experience for society, even though it benefits health if appropriately implemented. 41 The rapid spread of infection and the high mortality rate cause anxiety, depression, and stress in the community. 42 This underlines the importance of social support for emotional well-being in reducing anxiety and depression among those aged 18-34, adolescents, students, workers, and housewives. 43 Education has a significant relationship with depression. A relatively low educational background carries a higher risk of experiencing anxiety or depression. 44 Lack of knowledge about diseases and easily feeling helpless in the face of a pandemic cause psychological disorders. 45 People with higher education have better understanding and awareness, which tends to reduce anxiety and depression. 46 The highest level of education in the community is high school, but in addition to formal education, physical health education related to prevention and psychology can reduce anxiety and depression in the community. 47 The level of education is associated with physiological function and optimism; a more educated person is more aware of health, including knowledge, beliefs, service utilization, and good health behavior. 48 Occupation has a significant relationship with depression. This is due to the disruption of activities and work during COVID-19. A person feels pressured because long-term work and activities cannot be planned, causing financial losses. 41 Economic problems, such as depending on financial support during a pandemic, depending on family for a living, and low income, were associated with higher depression symptoms. 49 Each country implements lockdowns that impact people's livelihoods, increasing psychological morbidity. People who are infected and undergo isolation treatment lose their jobs and fear being discriminated against, causing depression. 50 Decreasing household income causes an increased risk of mental disorders. 51 There is a significant relationship between self-reported signs of being infected with COVID-19 and depression. Psychological problems such as depression have long-lasting impacts, including physical and emotional exhaustion, reduced income, and social stigma. 41 A person undergoing quarantine during a pandemic fears infecting other people, fears their own illness, and faces limitations in meeting family. 52 Some people who experience symptoms of physical changes, such as fever, sore throat, cough with phlegm, chills, high blood pressure, and muscle aches, feel worried about being infected with COVID-19. 25 Feelings of fear, worry, and depression occur because of prior contact with a patient diagnosed with COVID-19. 25 Based on the research results, it can be concluded that there is a significant relationship between anxiety and depression due to COVID-19. The emergence of feelings of loneliness and anxiety due to social restrictions is a main risk factor for depression. 53 The groups most affected psychologically are women, individuals with a history of psychiatric illness, those living in cities, and those with chronic diseases. 54 A person's health behavior can be affected by depression and anxiety during a pandemic. 54 Psychological pressure is also felt by students, educators, hospital patients, and health workers. 55
The general public felt anxious and depressed because of social restrictions during the early pandemic. 41 Depression occurs because of the burden of fear of transmitting the virus to others, changes in living conditions, and the increasing number of infected patients. 56 Therefore, it is crucial to identify anxiety and depression and their related factors, as they can have worse consequences for mental health, such as suicidal behavior. 57 Although the present study employed a valid instrument for data gathering, a limitation of this study was the recruitment process. Approximately 5% of the respondents did not complete the questionnaire fully, since it was an online survey. Therefore, future research is encouraged to consider a more effective respondent recruitment process. In addition, further research can explore promising interventions to reduce psychological issues, especially anxiety and depression, among the Indonesian community. The development of effective interventions is essential because, based on the results of this research, 26.6% and 30.5% of respondents experienced mild to severe anxiety and depression, respectively. Conclusion The COVID-19 pandemic has created a worldwide health crisis due to rapid transmission leading to death. Changes in various life arrangements have been adopted as an effort to prevent and reduce transmission. This condition causes the community to experience psychological disorders such as anxiety and depression. Anxiety in society is related to age, feeling infected by the virus oneself or among friends in the work environment, adequate information and the origin of information related to COVID-19, and the appearance of physical symptoms in individuals. Meanwhile, depression in society is related to the level of education, employment status, the types of symptoms experienced, and the anxiety that occurred during the COVID-19 pandemic. This study's results can be considered in providing psychological services for the general public. These psychological services can take the form of relevant information media, face-to-face assistance, or online media as an effort to reduce anxiety and depression in the community, so as to improve the quality of care and prevent mental health problems for the general public in Indonesia currently facing the COVID-19 pandemic.
Table 1. Demography and bivariate analysis of each variable (n = 1149)
Table 3. Test of the coefficient of determination of the models for anxiety and depression
2023-10-16T15:07:08.864Z
2023-08-28T00:00:00.000
{ "year": 2023, "sha1": "080c9bf0bafe65b5aa32cb4d5f9a8dd8dfb70593", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "6f583fb884521f8f8e885178b55120ae7435002d", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [] }
15641545
pes2o/s2orc
v3-fos-license
Histocompatibility and Immunogenetics in Cord Blood Transplantation This review of the immunogenetics of cord blood transplantation attempts to highlight the connections between classical studies and conclusions of the tissue transplantation field as a scholarly endeavor, exemplified by the work of Professor Hoecker, with the motivations and some recent and key results of clinical cord blood transplantation. The authors review the evolution of understanding of transplantation biology and find that the results of the application of cord blood stem cells to Transplantation Medicine are consistent with the careful experiments of the pioneers in the field, from the results of tumor and normal tissue transplants, histocompatibility immunogenetics, to cell and molecular biology. Recent results of the National Cord Blood Program of the New York Blood Center describe the functioning in cord blood transplantation of factors, well known in transplantation immunogenetics, like the F1 anti-parent effect and the tolerance-like status of donors produced by non-inherited maternal HLA antigens. Consideration of these factors in donor selection strategies can improve the prognosis of transplantation by characterizing "permissibility" in HLA-incompatible transplantation, thereby increasing the probability of survival and reducing the likelihood of leukemic relapse. Key terms: Umbilical Cord Blood, Hematopoietic Transplantation, Histocompatibility Genes, HLA matching. Received: January 14, 2010. In revised form: May 19, 2010. Accepted: August 11, 2010 INTRODUCTION Tissue (mostly bone marrow) and solid organ transplantation (kidney, liver, heart) are well-established therapeutic technologies in current medical practice and are performed on many thousands of patients every year. Cord blood allotransplantation, more recently introduced to clinical medicine, has brought about a broad change in the attitudes of transplant physicians, mostly because HLA-mismatched unrelated donor cord blood can achieve engraftment with usually manageable levels of graft-versus-host disease. Logistic advantages, such as donation free of risks, the possibility of long-term storage without inventory attrition, low prevalence of latent viral disease and prompt, safe availability, enhance its practical attractiveness. Yet, the immunogenetic properties of cord blood are predictable from its cellular composition and age in the context of an evolving histocompatibility system and are not surprising in the context of the evolving understanding of that system's biology. Although the broad immunologic bases of allotransplantation are reasonably well understood, important problems still remain to be solved properly. In this brief review of cord blood as a source of hematopoietic tissue for bone marrow replacement (Rubinstein, 2005) and its immunogenetic underpinnings, we will examine the biological background of transplantation immunogenetics in general and some current challenges in overcoming the clinical problems of cord blood transplantation. Early studies. The discovery of a genetic basis for the compatibility of tissue transplants was demonstrated by Little (1914) and quantitatively explored by Snell (1948), using transplantable tumors in inbred mouse strains. These Bar Harbor investigators disclosed the existence in mice of many genes whose shared presence in graft donors and recipients is required for the acceptance of tissue transplants.
Snell showed that these genetic systems (he termed them "Histocompatibility" or "H" genes) segregated as Mendelian traits, independently from each other, and that they differed in "strength", with one H gene, later designated as H-2 (see below), being much stronger than any of the others. That an immune mechanism is responsible for the rejection of transplants is, however, due to Gorer, who demonstrated the formation of antibodies with specificity for the donor's tissues (including, in mice, the erythrocytes) following allogeneic and xenogeneic transplantation (into rabbits) (1936, 1938). Gorer encountered several red cell-agglutinating allo- and hetero-antibodies, whose reactivity with the red cells of mice of different strains allowed him to designate two of the corresponding antigens as "antigen I" and "antigen II". With genetic back-crosses between three inbred mouse strains, Gorer demonstrated that tumors of a strain carrying antigen II would grow only in back-cross animals carrying antigen II. He demonstrated that anti-II hemagglutinins were produced by mice rejecting A strain tumors (antigen II-positive) and that cells from A-strain tumors specifically absorb such anti-antigen II allo-hemagglutinins. In 1938, Gorer rephrased the genetic "laws" of transplantation in immunological terms: "Normal and neoplastic tissues contain isoantigenic factors that are genetically determined. Isoantigenic factors present in the grafted tissue and absent in the host are capable of eliciting a response that results in the destruction of the graft". He added that, "Under special circumstance the response may not be elicited or grafted tissues may not be destroyed thereby". Thus, the recognition of genetic differences in transplantation operated through the immunological recognition of the products of alleles at certainly one and possibly, several unlinked H genetic loci. In joint experiments reported in 1948, Gorer, Snell, et al. demonstrated that antigen II is one of the product(s) of the "strong H" (H-2) locus and uncovered the linkage of the H-2 genetic determinants to the Fused locus in the 9th linkage group of the mouse. Snell continued his investigation of this linkage group and discovered the existence in it of unrelated genetic loci determining polymorphic skeletal defects. Shortly after this seminal collaboration, Gustavo Hoecker, then a young Chilean investigator, joined Gorer's Laboratory at Guy's Hospital in London with his wife, Dr. Olga Pizarro, and a year later, Snell's Laboratory in Bar Harbor. He then returned with Promethean offshoots (antibodies and breeding trios of congenic mice) from their research to start experimental immunogenetic work at the Universidad de Chile's Medical School as Chair of Biology and Genetics. Using Snell's congenic strains of mice (inbred lines that carried different H-2 alleles in a common genetic background), Hoecker was one of the first scientists to unravel the genetic and immunologic complexity of the H-2 gene. Thereafter, his was a life-long effort to understand the physiology of H-2 immunogenetic determinants and their biological role (Hoecker, 1986). He used classical and innovative tools, including immunization of F1-hybrid congenic animals and sequential absorption experiments, to make allo-antibodies of ever more restricted specificity.
He was able in that way to describe the extended immune phenotypes of many H-2 alleles and the existence of distinct H-2 loci, by demonstrating crossing-over within H-2. Hoecker's serological work achieved great accuracy, earning him an admirable international reputation for reliability, and helped pave the way to the forthcoming development of understanding in the field of human histocompatibility antigens and clinical transplantation. He also pioneered efforts to understand the role of H antigens in Medawar's Actively Acquired Tolerance by neonates (Billingham et al., 1953) by investigating their ontogeny, in the late fifties. Together with Olga Pizarro, he demonstrated that the H-2 antigens acquired full phenotypic expression late in intrauterine life, at, or within a few days of, birth, depending on the strain (Pizarro et al., 1961; Hoecker and Pizarro, 1961). The nature of the relationship between the delayed emergence of H-2 antigenic expression and immunological tolerance was not, however, precisely established. In the case of hematopoietic tissue transplantation, in addition to the problem of incompatible graft rejection, donor tissue regularly carries reactive cells of the immune system. Such cells produce the syndrome designated by Morten Simonsen as "secondary" or "runt" disease (now, graft-versus-host disease) in recipients of allogeneic hematopoietic tissue (Simonsen, 1957). Interestingly, as also shown by DHW Barnes et al., 1958, F1 hybrid animals given adult hematopoietic transplants from either parental strain regularly suffer a severe or fatal graft-versus-host reaction, but, in their words, "The reduction and attenuation of 'secondary disease' which follows the use of foetal myeloid tissue is thought to be due to the acquisition of partial tolerance, by the maturing foetal cells, for host antigens". Our own interest in cord blood transplantation, thus, stems from the report by Billingham et al., 1953, on acquired tolerance and from additional data reported by Simonsen (1957), DHW Barnes et al. (1958) and Hasek et al. (1961). Hoecker's work was, in several ways, very much in the mainstream of the research that led to the discovery of the Major Histocompatibility System of humans (van Rood, 1958; Payne and Rolfe, 1958; Dausset, 1959), now HLA (Human Leukocyte Antigen), and the enormous expansion of clinical transplantation, although this application was not, admittedly, his primary goal. In particular, his work antedated the demonstration of the extensive polymorphism of the HLA antigens in humans and the realization that, just as in the mouse, HLA antigens are encoded by closely linked genes underlying different functions related to epitope presentation and activation of immune cells (reviewed by Bodmer, 1987). Bodmer (1987) and Benacerraf (1971) saw the complexity of MHC genes and their products as providing molecular clues for the study of the genetic control of immune responses. Initially, this study used chemically defined haptens and progressively more physiologic antigenic groups and permitted approaching the causes for the association of certain diseases with the presence or absence of specific HLA antigens.
Thus, the relation of H-2 and HLA to transplant acceptance or rejection is not only that of being molecular targets for tissue-rejecting cytotoxic alloantibodies and cells, but of presenting diverse kinds of epitopes (including those of alloantigens) for recognition by, and clonal expansion of, previously naïve cells of the immune system. The major histocompatibility system is thus not only involved in transplantation, but functions as a major immunological control agency, in Bodmer's words, "a super supergene" (Bodmer, 1978). CLINICAL TRANSPLANTATION HLA-matched transplantation has led to solutions for previously intractable medical problems such as the terminal failure of solid organs (kidneys, hearts, livers and lungs) and also of bone marrow. As stated above, because of the extreme polymorphism of HLA loci, which is also ethnically stratified, finding matched donors for patients who need bone marrow transplants has become a worldwide effort. Vast national and international collaborations have permitted finding suitable donors for many patients, even for some who have HLA types of vanishingly low frequencies (van Rood, 2007). In the solid organ field the problem exists too; the problem here is organizing a search for donors (largely cadaveric) and using HLA as a fast tool to find preregistered patients that are matched to them. Originally proposed by van Rood (1967), international cooperation, in multiple ways, has become a necessary and standard tool. For patients with rarer HLA types, hierarchical rules (waiting lists) for access to scarce donors have been set up. (Such information is available on the Web sites of transplant sharing organizations, such as UNOS, the Scientific Registry of Transplant Recipients, and the Organ Procurement and Transplantation Network.) Bone marrow transplantation is a particularly complicated form of transplantation because, in addition to the usual risk of (host vs.) graft rejection, the immune attack is potentially bilateral. Graft vs.
Host reactivity underlies a very serious risk of graft-versus-host disease (GvHD), which, clinical practice shows, is a more frequent complication of bone marrow allografts than rejection, even with modern immunosuppression and GvHD prophylaxis. Thus, registries of volunteer bone marrow donors currently list over 14 million potential donors (a little less than the populations of Holland or Chile, each about 16.5 million inhabitants in 2008), and donor recruitment continues. Despite these huge numbers, and the fact that sibling donors (25% of whom are HLA-identical-by-descent) and other family members who are HLA-matched (although not identical-by-descent) contribute importantly to the provision of well-matched grafts, many patients in need of bone marrow grafts still cannot find an HLA-matched donor. Almost always, hematopoietic stem cell grafts are needed for patients with very poor prognosis (typically, with malignancies, but also with genetic conditions, like Fanconi anemia and some metabolic diseases, or acquired marrow insufficiencies, such as severe aplastic anemia). Therefore, most of these patients will die unless a suitable donor is found within a reasonably short time (a few months, or weeks, in some cases). Cord blood grafts, frozen and available "off the shelf", are importantly advantageous. NIMA (maternal non-inherited HLA antigens): a form of Acquired Tolerance? The donor search might be facilitated by the observation that a substantial fraction of kidney patients fail to make antibodies against their maternal HLA antigens, not only those inherited but, in a surprisingly high proportion of cases, the non-inherited as well. That HLA NIMAs can "condition" some patients so that they tolerate their presence in an incompatible kidney graft was described by Claas et al., 1988. Because polytransfused patients tend to become highly sensitized by blood leukocytes and may develop antibodies reactive with many HLA alloantigens, finding a cross-match-negative kidney donor may be extremely difficult. The survey by Claas et al. (1988) showed that 50 percent of highly sensitized patients were cross-match-negative (did not form antibodies) against their NIMAs (non-inherited maternal HLA antigens). Apart from the obvious clinical implications regarding donor acceptability, Claas' data suggest that a long-lived human counterpart of murine neonatal tolerance may have been disclosed. Such a possibility has also been investigated in bone marrow transplantation and in regard to basic immunobiological mechanisms (van Rood and Claas, 1990; van Rood et al., 2002; Burlingham, 2009; van Rood et al., 2005; Mold et al., 2008). Umbilical cord blood transplantation: The practical utilization of cord blood as donor tissue for HLA-matched or mismatched unrelated recipients (Rubinstein et al., 1993; Rubinstein, 2005; Locatelli et al., 2003; Querol et al., 2009) has grown in popularity and currently constitutes a substantial and growing fraction of all marrow transplantation (World Marrow Donor Association, 2009).
Cord blood grafts do have shortcomings, among which the low total number of cells is the most difficult to overcome. In general, cord blood transplants provide 1/10 of the total nucleated cell (TNC) dose of a usual bone marrow graft, except in small children. In comparison with bone marrow transplants, those of cord blood display delayed engraftment, an increased probability of early graft failure and higher short-term transplant-related mortality (Rubinstein et al., 1998). However, the probability of severe GvHD is lower, and this accounts for a higher overall survival for patients at three years post-transplantation, particularly when the cord blood transplants do not present more than 2 HLA antigen mismatches (Eapen et al., 2007). Other advantages include the immediate availability of the frozen cell grafts when needed and a lower probability of transmission of latent infectious organisms (Rubinstein, 2005). Most likely, 18,000 unrelated cord blood transplants have already been done worldwide, some 3,500 provided by the National Cord Blood Program of the New York Blood Center as of March, 2010. Outcome data covering three years post-transplant are available in over 90% of cases transplanted with National Cord Blood Program grafts through 2006. These grafts were performed in over 100 different Transplant Centers worldwide since 1993, for different diseases, using different conditioning regimens and GvHD prophylactic routines, and with donor selection schemes that varied in the relative importance assigned to histocompatibility and cell dose, the two most important donor variables (Rubinstein et al., 1998; Eapen et al., 2007; Querol et al., 2009; Barker et al., 2010). It is important to note that the degree of HLA typing resolution required in the case of bone marrow transplantation is full allele-level matching for HLA-A, -B, -C and -DR, while for cord blood transplantation, matching HLA-A and -B at antigen-level resolution only and HLA-DR at allele level remains the current practice (a toy illustration of this convention is sketched below). The lower resolution required for HLA-A and -B, and the fact that HLA-C matching has not yet been found to influence the outcome of cord blood transplants, greatly reduce the level of polymorphism currently to be considered in cord blood transplantation. Hence, "suitable" unrelated donors are routinely obtained more consistently than with bone marrow (despite the much lower number of typed donors available). Because of the interactions between the independent factors, histocompatibility and cell dose, and their combined influence on Transplant-Related Mortality (TRM), our data are shown in Table 1 separated into sets defined by both, and thus split among groups with low, medium and high TNC doses and either 1 or 2 antigen mismatches, considering the three main HLA loci (HLA-A, -B and -DR). Our main conclusions from the data are: 1. We confirm that both the HLA match grade and the cell dose are separately associated with TRM in cord blood transplantation. 2. Two HLA mismatches are associated with increased TRM compared to one (RR = 1.9 vs. 1.7), independently of the cell dose. 3. In the presence of two HLA mismatches, graft TNC doses < 4.9 X 10^7/kg are associated with significantly higher TRM and should not be preferred when a better combination of match and cell dose is available. 4. A better HLA match and a higher cell dose reduce TRM rates, compared to lesser ones, for patients with HLA mismatches.
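Following the matching convention described above, here is a minimal sketch of a match-grade counter; the typing-string format and function names are our own assumptions for illustration, not the Program's software.

```python
def antigen_level(typing):
    """'A*02:01' -> 'A*02' (two-digit, antigen-level resolution)."""
    return typing.split(":")[0]

def cord_blood_match_grade(patient, donor):
    """Count matches out of 6: HLA-A and -B at antigen level, DRB1 at allele
    level. patient, donor: dicts mapping a locus to a list of two typings."""
    score = 0
    for locus in ("A", "B", "DRB1"):
        level = (lambda t: t) if locus == "DRB1" else antigen_level
        donor_types = [level(t) for t in donor[locus]]
        for p in [level(t) for t in patient[locus]]:
            if p in donor_types:
                donor_types.remove(p)   # each donor antigen matches only once
                score += 1
    return score

patient = {"A": ["A*01:01", "A*02:01"], "B": ["B*07:02", "B*08:01"],
           "DRB1": ["DRB1*15:01", "DRB1*03:01"]}
donor   = {"A": ["A*01:02", "A*02:05"], "B": ["B*07:02", "B*08:01"],
           "DRB1": ["DRB1*15:01", "DRB1*03:01"]}
print(cord_blood_match_grade(patient, donor))  # 6: allele-level A differences
                                               # are ignored at antigen level
```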
In clinical comparisons with other sources of hematopoietic stem cells, cord blood does quite well. In a recent study in children with acute leukemia (in cooperation with the International Bone Marrow Transplant Registry) (Eapen et al., 2007), we found that recipients of fully matched cord blood transplants had a lower incidence of TRM than recipients of bone marrow fully matched at allele level: relative risk = 0.26 (2/26 vs. 24/116), as well as lower relapse: relative risk = 0.68 (11/35 vs. 45/116), and fewer treatment failures: relative risk = 0.67 (13/35 vs. 45/116). Five-year leukemia-free survivals were: for allele-level fully-matched bone marrow, 38%; for matched cord blood, 60%; and for 1-antigen-mismatched cord blood with more than 5 X 10^7 nucleated cells, 45%. Because of the relatively low numbers of cord blood recipients of well-matched and high-cell-dose grafts and the multiple comparisons tested, the results of these multivariate comparisons were not statistically significant. Data in adults, from both our Program together with the IBMTR (Laughlin et al., 2004) and from Eurocord (Rocha et al., 2004), indicate that the overall differences between the outcomes of cord blood and bone marrow transplantation are becoming smaller and that long-term survival has recently been improving markedly for both types of grafts, in part due to improved HLA typing methodology but also from other, still not well defined, causes. HLA and the F1 effect (unidirectional mismatches): Current efforts to further improve the stem-cell transplantation of patients with these dread diseases include the F1-anti-parent effect, discovered almost a century ago in the tumor transplantation field: when F1 hybrid animals between two H-2-different inbred strains of mice receive grafts from either parental strain donor, the recipient is unable to reject them (Little and Tyzzer, 1916; AD Barnes and Krohn, 1957). When the grafts are hematopoietic tissue or simply immune system cells, the lack of rejection allows the grafted tissue to mount an immune response against the host, resulting in potentially lethal graft-versus-host disease. When the parental inbred strain tissue donors (H-2 homozygous) are embryonic or neonatal, however, their immunological "immaturity" reduces the impact of graft-versus-host disease against the heterozygous recipients, which may, thus, engraft and remain H-2-chimeric and free of graft-versus-host disease. For this reason, we retrospectively explored the results of HLA-homozygous donor cord blood grafts to heterozygous recipients (i.e., AA→AB) as the sole HLA mismatch (GvHD-only mismatches) (Table 2). Consequently, both HLA mismatching and the direction of mismatching affect the relative risks (RR) and the significance (p) for clinical endpoints. The directional effect is particularly interesting in the case of leukemic relapse, where rejection-only mismatches increase the probability of relapse (compared to bilateral mismatches) with p < 0.001, while GvHD-only mismatches lead to significant clinical improvement in engraftment and lower mortality. Because the numbers of homozygotes are small, the P values for transplants with two mismatches usually don't differ significantly from the reference value (with one mismatch), although they display a trend towards significant clinical improvement.
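The directional logic above lends itself to a simple per-locus classifier. The sketch below is our own illustrative encoding (antigen sets and labels are assumptions, not clinical software): antigens present in the graft but absent from the host can be rejection targets, while host antigens absent from the graft can drive GvHD.

```python
def mismatch_direction(recipient, donor):
    """recipient, donor: sets of antigens at one HLA locus, e.g. {'A1','A2'}."""
    rejection_targets = donor - recipient   # graft antigens foreign to the host
    gvh_targets = recipient - donor         # host antigens foreign to the graft
    if not rejection_targets and not gvh_targets:
        return "matched"
    if gvh_targets and not rejection_targets:
        return "GvHD-only"        # homozygous donor, e.g. AA -> AB
    if rejection_targets and not gvh_targets:
        return "rejection-only"   # homozygous patient, e.g. donor AB -> AA
    return "bidirectional"

print(mismatch_direction({"A1", "A2"}, {"A1"}))   # GvHD-only (donor A1,A1)
print(mismatch_direction({"A1"}, {"A1", "A2"}))   # rejection-only (patient A1,A1)
```

On this encoding, the AA→AB graft discussed in the text is GvHD-only (the heterozygous recipient sees nothing foreign), while a homozygous leukemic patient given a heterozygous graft faces a rejection-only mismatch, the configuration associated above with increased relapse.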
HLA and the F1 effect (unidirectional mismatches): Current efforts to further improve the stem-cell transplantation of patients with these dread diseases include the F1-anti-parent effect, discovered almost a century ago in the tumor transplantation field: when F1 hybrid animals between two H-2-different inbred strains of mice receive grafts from either parental strain donor, the recipient is unable to reject them (Little and Tyzzer, 1916; Barnes and Krohn, 1957). When the grafts are hematopoietic tissue or simply immune system cells, the lack of rejection allows the grafted tissue to mount an immune response against the host, resulting in potentially lethal graft-versus-host disease. When the parental inbred strain tissue donors (H-2 homozygous) are embryonic or neonatal, however, their immunological "immaturity" reduces the impact of graft-versus-host disease against the heterozygous recipients, which may, thus, engraft and remain H-2-chimeric and free of graft-versus-host disease. For this reason, we retroactively explored the results of HLA-homozygous donor cord blood grafts to heterozygous recipients (i.e., AA→AB) as the sole HLA mismatch (GvH-only mismatches) (Table 2). Consequently, both HLA mismatching and the direction of mismatching affect the relative risks (RR) and the significance (p) for clinical endpoints. The directional effect is particularly interesting in the case of leukemic relapse, where rejection-only mismatches increase the probability of relapse (compared to bilateral mismatches) with p < 0.001, while GvHD-only mismatches lead to significant clinical improvement in engraftment and lower mortality. Because the numbers of homozygotes are small, the p values for transplants with two mismatches usually do not differ significantly from the reference value (with one mismatch), although they display a trend towards significant clinical improvement.

It is important to note that, because ~20% of donors in our inventory are homozygous for one or more HLA antigens, the probability of finding donors who are HLA mismatched only by lacking one (or more) recipient antigen(s) is not too small. Overall, accepting GvHD-only, one-way mismatches would almost double the chance of finding donor grafts that perform as well as fully matched ones and would yield improved survival data. Furthermore, avoiding mismatches for HLA loci that are homozygous in leukemic patients (rejection-only mismatches) would also dramatically reduce the probability of relapse.

Non-inherited maternal antigens (NIMA): Another potentially most important modification of outcomes by immunogenetic effects is the demonstration that the already mentioned NIMA effect is operational in cord blood transplantation. Thus, NIMA-matched HLA mismatches (antigens shared by the graft donor's mother and the recipient, but absent from the donor and the graft itself) result in remarkable reductions or absence of the cord blood's reactivity to maternal non-inherited HLA antigens present in the recipient. In collaboration with van Rood's group, the National Cord Blood Program of the New York Blood Center has encountered evidence that a defined HLA mismatch between donor and recipient may not decrease engraftment or survival, and thus be "permissible", when the mismatched HLA antigen of the recipient was present in the donor's mother's HLA type as a NIMA (van Rood et al., 2009). In this clinical situation, a transplant where the one- or two-HLA mismatch co-existed with a NIMA match had significantly faster engraftment (p = 0.043) and better survival: lower transplant-related mortality (TRM; p = 0.012) and better overall survival (p = 0.029), especially in patients >10 years old. The "protection" conferred by a single NIMA match against decreased patient survival in transplants with two HLA mismatches was less than in those with a single HLA mismatch, and similar to that of patients with a single HLA mismatch and no NIMA match (van Rood et al., 2009). The statistical strength of these differences is currently limited, as only small numbers of patients happened to have been transplanted with HLA mismatches that were NIMA matches. The study was done (and transplant NIMA status determined) retrospectively, because not all mothers had been HLA-typed up front. The number of transplants with NIMA-matched donors was, therefore, much smaller than it would have been if the HLA type of all mothers (and thus the pre-transplant ascertainment of NIMAs) had been part of the transplant selection criteria. The results are just sufficiently strong to support the proposal of a prospective trial in which donor NIMA information is used in the selection of the best possible cord blood match for patients. (A full report on these data has recently appeared online (Barker et al., 2009).)

In practical terms, if the information obtained in the small proportion of mismatched transplants that had a NIMA for a mismatched HLA antigen were confirmed and could be used prospectively, the number of better-matched grafts available would increase by five- to ten-fold. The reason is that any potential HLA mismatch for which the donor has a NIMA match (say, patient HLA-A1, -A2; donor -A1, -A3; but where the donor's mother's type was HLA-A2, -A3) would be a "permissible mismatch". Because any recipient's HLA-mismatched antigen could be present in a given donor's mother as a NIMA, disclosing the maternal NIMA haplotype would identify NIMA matches for any single A, B or DR mismatch; a minimal sketch of this check follows.
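The donor-selection rule just described can be made concrete. The sketch below, under simplifying assumptions (antigens as plain strings, a single locus, antigen-level resolution only), flags each recipient antigen that the donor lacks, i.e., a GvH-direction mismatch, as "covered" when it appears among the donor's mother's antigens as a NIMA. It illustrates the logic only; it is not the Program's actual matching software.

```python
def nima_permissible(recipient, donor, donor_mother):
    """Return, for each recipient antigen the donor lacks (GvH-direction
    mismatch), whether it is covered as a non-inherited maternal antigen
    (NIMA) of the donor. All arguments are sets of antigen labels at one
    locus, e.g. {"A1", "A2"}. Simplified to antigen-level, single-locus."""
    gvh_direction_mismatches = recipient - donor   # recipient antigens absent from donor
    nimas = donor_mother - donor                   # maternal antigens the donor did not inherit
    return {antigen: antigen in nimas for antigen in gvh_direction_mismatches}

# The worked example from the text: patient A1,A2; donor A1,A3; donor's mother A2,A3.
print(nima_permissible({"A1", "A2"}, {"A1", "A3"}, {"A2", "A3"}))
# -> {'A2': True}: the A2 mismatch is NIMA-matched, hence potentially "permissible".
```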
Then, NIMAs would extend dramatically the probability of finding appropriate donors for patients within relatively small donor panels, as is likely the case for ethnic minority patients. The potentially multiplicative effect of NIMA on the probability of ascertaining excellent donors promises to be a further major contribution of immunogenetics research to medicine, by accelerating the attainment of sufficiently large donor panels for graft selection (and reducing the cost of such panels). More importantly, it will also contribute to understanding the biology of mother-child immunological influences during pregnancy, by investigating the molecular specificity of the tolerogenic interactions that take place.

CONCLUSIONS

The use of immunogenetics technology in the definition of the broad mechanisms of transplant acceptance and rejection, and of their dependence on histocompatibility matching, has markedly improved the probability of success in clinical transplantation. This short review of issues relevant to selecting unrelated donors for patients needing transplants attempts to highlight the contribution that basic science has already made. The remarkable insights of the pioneers whose contributions built the current clinical matching capabilities have led to our ability to select matched donors, and also to the identification of mismatched donors whose mismatches are "permissible" (in the context of current immunosuppressive therapies). The latter may improve the odds of finding donors appropriate to specific recipients by virtue of NIMA influences and, at least for cord blood, by advantageously utilizing unidirectional mismatches with donors homozygous at one or more HLA loci. Once these approaches are confirmed and broadly applied in donor selection, cord blood could revolutionize the clinical results of hematopoietic transplantation and extend access to many more patients. This contribution to medicine and public health will additionally help define the mechanisms and consequences of the polymorphism of major histocompatibility systems in the development of self-nonself immune discrimination.

Table 1. Interaction between HLA and cell dose in cord blood transplants (single CB transplants with outcome data, N = 1667).
Coping with Stress During the Coronavirus Outbreak: the Contribution of Big Five Personality Traits and Social Support

This study investigated the relationships between active problem-focused and maladaptive emotion-focused coping with stress during the Coronavirus outbreak, the Big Five personality traits, and social support among Israeli-Palestinian college students (n = 625). Emotion-focused coping negatively correlated with social support, openness, extraversion, conscientiousness, and agreeableness, while it positively correlated with neuroticism. On the other hand, problem-focused coping was found to positively correlate with social support, openness, extraversion, conscientiousness, and agreeableness, but to negatively correlate with neuroticism. Thus, positive social support may increase one's ability to cope actively, adaptively, and efficiently. In addition, Israeli-Palestinian college students high in openness, extraversion, agreeableness, and conscientiousness tend to use active problem-focused coping, while those high in neuroticism tend to use maladaptive emotion-focused coping.

Earlier quarantines, such as the one in West Africa due to Ebola (Drazen et al. 2014) and the 2015 quarantine in Korea due to MERS (Jeong et al. 2016), all show a significant increase in mental health problems among quarantined individuals. Individuals in crisis are highly prone to experience increased levels of stress, tension, and anxiety. Nevertheless, various types of stressors affect people differently, depending on each individual's coping abilities and capabilities (Byrd and McKinney 2012). Many scholars have thus identified the types of personal (individual) and interpersonal (social) stressors that can be controlled during this period as a means to find ways to cope with them. Personal (individual) stressors include confinement, loss of structure and routine, confusion, uncertainty and fear of the unknown, fear of infection, poor concentration, reduced physical activity and exposure to sunlight, sleep disturbances and excessive use of digital media before bedtime, changes in eating patterns, and high consumption of COVID-19-related news and media (Altena et al. 2020; Brooks et al. 2020; Buheji and Ahmed 2020; Cellini et al. 2020; Hiremath et al. 2020; Torales et al. 2020).

Coping Strategies

Coping is a regulatory process that serves to reduce the negative emotional effects of stressful events (van Berkel 2009). Here, coping strategies refer to the methods adopted and practiced by individuals as means to deal with various stressors. Scholars have identified multiple strategies for coping and further examined how these various coping strategies are influenced by intrapersonal, interpersonal, and environmental factors. Consequently, psychologists have distinguished between more than 400 coping strategies (Skinner et al. 2003), and these have been mainly classified either within the "approach-or-avoidance" model (Finset et al. 2002; Roth and Cohen 1986) or the emotion- or problem-focused coping model (Lazarus and Folkman 1984). The most common categories used, nevertheless, fall within the problem- and emotion-focused coping model. Problem-focused coping is defined as individual efforts to "employ active strategies to resolve the stressors" (Riley and Park 2014). This type of coping is usually employed when individuals feel that something constructive can be done directly to alter the source of their stress (Folkman and Lazarus 1980).
Hence, problem-focused coping includes problem-solving strategies and task-oriented actions such as planning, seeking instrumental support, and following through on steps that can directly reduce or resolve the problem (Carver and Connor-Smith 2010; Lazarus and Folkman 1984; Nes and Segerstrom 2006). Research shows that problem-focused coping is usually associated with adaptive outcomes such as better academic performance (MacCann et al. 2011) and higher marital satisfaction (Stoneman et al. 2006). Preliminary studies have also demonstrated that adaptive, problem-focused coping strategies are associated with greater psychological well-being during the COVID-19 quarantine precautions (Fu et al. 2020; Rogowska et al. 2020).

On the other hand, emotion-focused coping includes "processing and expressing feelings arising from the stressor" (Riley and Park 2014). In other words, emotion-focused coping can be defined as the process of employing emotion-based strategies in an attempt to reduce or manage the emotional distress evoked by a situation (Carver and Connor-Smith 2010; Lazarus and Folkman 1984). This can involve adaptive strategies such as reappraising or reinterpreting a stressor as nonthreatening (Lazarus 1993) or attempting to relax using breathing techniques (Nes and Segerstrom 2006). However, emotion-focused coping can also involve maladaptive strategies such as wishful thinking, denial, avoidance, self-blame, and interpersonal withdrawal (Carver and Connor-Smith 2010). Research tends to show that adaptive forms of emotion-focused coping are associated with positive outcomes (Austenfeld and Stanton 2004), whereas maladaptive forms of emotion-focused coping are associated with a range of negative emotions and cognitions (Carver et al. 1989; Compas et al. 2001; O'Brien and DeLongis 1996). Importantly, initial studies have found that using maladaptive emotion-focused coping strategies during the COVID-19 quarantine period is related to poorer psychological functioning (e.g., elevated anxiety and depression) (Fu et al. 2020; Rogowska et al. 2020). Given that individuals react to stress in various ways, understanding the likelihood of coping with stressors adaptively or maladaptively requires an examination of the moderating factors (e.g., individual differences, personality traits, and/or social relations) that link individuals to different types of stressors (Chai and Low 2015).

Big Five Personality Traits and Coping Strategies

Students are faced with numerous, unique developmental challenges (e.g., academic performance, career choice, peer acceptance). As such, their ability to cope effectively is crucial under the transactional model of stress developed by Lazarus and colleagues (Lazarus 1966; Lazarus and Folkman 1984). Broad associations observed between personality traits and coping with stress may provide important insight into individual differences in patterns of thinking, feeling, and behaving that translate into managing stress through adaptive versus maladaptive coping (Zainah et al. 2019). The Big Five personality traits are emotional stability/neuroticism, extraversion, openness, agreeableness, and conscientiousness, and they are viewed as the basic dimensions of personality (Costa and McCrae 2008). The Big Five personality traits refer to the behavioral patterns that individuals with certain personality traits exhibit over time (Ezeakabekwe and Nwankwo 2020).
As specified in the socio-genomic model, personality traits are defined as relatively enduring, automatic patterns of thoughts, feelings, and behaviors that distinguish individuals from each other (Roberts 2017). Many researchers have looked into the relationship between coping and the Big Five personality traits (e.g., Connor-Smith and Flachsbart 2007; Costa et al. 1996; Lee-Baggley et al. 2005; McCrae and Costa Jr 1986; O'Brien and DeLongis 1996; Parkes 1986; Penley and Tomaka 2002; Preece and DeLongis; Watson and Hubbard 1996). Some studies have shown that non-adaptive personality traits like neuroticism are positively associated with avoidance coping (Connor-Smith and Flachsbart 2007; Penley and Tomaka 2002; Watson and Hubbard 1996). In contrast, adaptive personality traits like conscientiousness are positively related to active coping styles (e.g., planning and problem solving) (O'Brien and DeLongis 1996; Watson and Hubbard 1996). The association between personality traits and coping strategies suggests that individuals with maladaptive personalities are at a greater risk of experiencing psychological distress, as they usually employ maladaptive coping strategies like avoidant coping (Holahan et al. 2005). However, not all findings about the relationship between personality traits and coping strategies are consistent. For instance, some researchers were not able to find a significant relationship between coping and personality traits like agreeableness, conscientiousness, and openness (David and Suls 1999; Hooker et al. 1994). Moreover, researchers were also unable to find a significant relationship between extraversion and problem-focused coping (Hooker et al. 1994; O'Brien and DeLongis 1996), or between extraversion and adaptive forms of emotion-focused coping, such as seeking support and accepting responsibility (David and Suls 1999; O'Brien and DeLongis 1996). Thus, a more nuanced look at the associations with each of the five personality traits is needed to provide context for the present work.

Neuroticism (N)

Neuroticism (N) refers to personalities that are more vulnerable to experiencing emotional instability and self-consciousness (Costa and McCrae 2008). Scholars have found that neuroticism is positively correlated with perceived stress (Mirhaghi and Sarabian 2016). Individuals high in N are prone to experience negative emotions such as depression, anxiety, or anger, and they tend to be impulsive and self-conscious (McCrae 1992; McCrae and Costa Jr. 1987). Therefore, these individuals are generally more prone to experiencing psychological distress, as their subversive emotions interfere with their adaptation process. Such subversive emotions result from irrational thoughts, a lesser ability to control self-motivation, and more negative ways of dealing with stress (Digman 1989; Digman and Inouye 1986; Mervielde et al. 1995). In terms of coping strategies, scholars have found that individuals high in N specifically employ avoidance coping more than other strategies (David and Suls 1999; Gunthert et al. 1999; O'Brien and DeLongis 1996; Roesch et al. 2006; Zainah et al. 2019), though maladaptive emotion-focused coping strategies (e.g., substance abuse, behavioral disengagement, venting, and self-blame) are also common (Boyes and French 2010). This is because individuals who are high in N are more susceptible to psychological distress, prone to irrational thoughts, and less able to control their impulses (Costa and McCrae 1992).
In addition to avoidance, neuroticism was also found to be linked to immature coping strategies such as self-blame and fantasizing (Wang and Miao 2009). However, neuroticism may also be positively related to adaptive emotion-focused coping (e.g., seeking emotional support) (Smith et al. 1989), suggesting some individual variability. Nevertheless, as shown in the literature, neuroticism is most substantially related to maladaptive coping, and one reason for this is that individuals high in N have usually experienced acute fear and traumatic distress (Hengartner et al. 2017).

Extraversion (E)

Extraversion (E) refers to the tendency to be outgoing, to prefer large groups and gatherings, and to be assertive, active, and talkative. Individuals high in E are found to be highly upbeat, energetic, and optimistic (Digman 1989; Digman and Inouye 1986; Mervielde et al. 1995). Moreover, extraversion reflects the tendency to be gregarious, enthusiastic, and assertive, and to seek excitement. This could be one of the reasons why extraversion has been found to be negatively correlated with stress and anxiety (Mirhaghi and Sarabian 2016). Like others, individuals high in E experience distress, but they may be able to cope with life's stressors more effectively than others, in a manner that reduces the negative consequences of experiencing distress. In terms of coping strategies, previous studies have found that extraversion was positively correlated with active coping strategies, such as problem solving and seeking support (Karimzade and Besharat 2011; Vollrath and Torgersen 2000; Wang and Miao 2009; Zainah et al. 2019). More specifically, extraversion was found to be positively related to active problem-focused coping (e.g., problem solving and planning) and adaptive emotion-focused coping (e.g., positive reframing, humor, seeking support, and acceptance) (Roesch et al. 2006). This is because individuals who are high in E are usually cheerful and motivated, and thus they often engage in active coping and positive reappraisal (e.g., Amirkhan et al. 1995; Costa et al. 1996; Watson and Clark 1992; Watson and Hubbard 1996). It is noteworthy that some studies have found no relationship between extraversion and coping (O'Brien and DeLongis 1996). Thus, this relationship remains an underdeveloped area of study, with potential nuances and individual differences.

Conscientiousness (C)

Conscientiousness (C) refers to the tendency to resist impulses and temptations. Hence, individuals high in C usually strive for dutifulness and competence (Costa and McCrae 2008). Moreover, these individuals are found to be highly purposeful, strong-willed, and determined. On the positive side, individuals high in C are usually associated with academic and occupational achievements, but, on the negative side, these individuals may be drawn to annoying fastidiousness, compulsive neatness, and workaholic behavior (Digman 1989; Digman and Inouye 1986; Mervielde et al. 1995). Nevertheless, conscientiousness was found to correlate negatively with stress (Mirhaghi and Sarabian 2016). For instance, studies that looked into the relationship between conscientiousness and occupational stress found that conscientiousness serves as a protective factor against exhaustion due to prolonged occupational stress (Swider and Zimmerman 2010). Although conscientiousness usually correlates negatively with stress and anxiety, some studies have found contradictory associations (Scher and Osterman 2002; Sheridan et al. 2015).
Vreeke and Muris (2012) found that conscientiousness served as a positive predictor of behavioral inhibition, a component of anxiety characterized by timidity and withdrawal. This suggests that conscientiousness and anxiety may be interrelated and can co-exist in individuals. Furthermore, previous research has indicated a significant interaction between anxious arousal and conscientiousness in predicting ambition: physically anxious individuals with higher conscientiousness experience higher levels of professional ambition compared to physically anxious individuals with lower levels of conscientiousness (Chandra et al. 2020). This may be because individuals high in conscientiousness are careful planners and extremely rational decision makers, especially in situations where they encounter a stressor (Chartrand et al. 1993; Hooker et al. 1994; Vollrath et al. 1994). In terms of coping strategies, conscientiousness was found to be positively related to active problem-focused coping (e.g., planning) and adaptive emotion-focused coping (e.g., positive reframing, humor, and acceptance) (Karimzade and Besharat 2011; Leandro and Castillo 2010; Roesch et al. 2006). This is because individuals high in conscientiousness are purposeful, strong-willed, and determined, and they are deliberate before they act (Costa and McCrae 1992).

Openness (O)

Openness (O) is a cognitive disposition to creativity and esthetics (Costa and McCrae 2008; Zainah et al. 2019). Individuals high in openness have the tendency to be curious about both their inner and outer worlds. These individuals are found to experience active imagination, esthetic sensitivity, and attentiveness to inner feelings, and they usually display a preference for variety, intellectual curiosity, and independent judgment. These individuals are usually unconventional, willing to question authority, and ready to debate new ethical and social ideas (Digman 1989; Digman and Inouye 1986; Mervielde et al. 1995). In terms of coping strategies, openness has been positively related to active problem-focused coping (e.g., planning) and adaptive emotion-focused coping (e.g., positive reframing, humor, and acceptance) (Chai and Low 2015; McCrae and Costa Jr 1986; Roesch et al. 2006; Watson and Hubbard 1996). This is because individuals high in openness are curious about both inner and outer worlds, and they are perceived to be more flexible and creative than others. Hence, these individuals are able to cope more effectively than others, as they have the capacity to employ multiple coping methods simultaneously to minimize the effects of the stressor and/or the distress experienced (Roesch et al. 2006).

Agreeableness (A)

Lastly, agreeableness (A) is defined as the tendency to be altruistic, sympathetic to others and eager to help them, and to believe that others will be equally helpful in return (Digman 1989; Digman and Inouye 1986; Mervielde et al. 1995). Agreeableness was found to be negatively correlated with stress (Mirhaghi and Sarabian 2016). However, when individuals high in agreeableness experience stress, they usually employ adaptive coping strategies like active coping and humor. Moreover, agreeableness has been negatively associated with impulsive behaviors like substance abuse (Hengartner et al. 2017).
Social Support and Coping Strategies

Social support can be defined as providing care, information, assistance, and/or resources to individuals in a manner that facilitates their adaptation to life's stressors (Cutrona 1996). Social support is an important factor that helps reduce the negative effects of stress. Prior research has shown that social support has the ability to improve physiological and mental health, increase one's adaptation to chronic diseases, and reduce mortality rates (Umberson and Karas Montez 2010). Additionally, high levels of social support make it easier for the individual to establish better self-esteem, increase the individual's perception of their ability to cope with stress, and strengthen the individual's perception of their capability to solve problems and minimize the severity of stressors (Wang et al. 2014). Most scholars agree that social networks can act as an invaluable coping resource. This has specific relevance to members of the Israeli-Palestinian community living in Israel, who demonstrate the positive effects of social support, especially when dealing with stressful conditions (Agbaria 2013; Agbaria 2019; Agbaria and Bdier 2019; Agbaria et al. 2017; Agbaria and Natur 2018; Agbaria et al. 2012). Nonetheless, some studies have also suggested that social networks can act as an impediment to adaptive coping. Although attempts to provide social support are usually well intentioned, these attempts are not always perceived as helpful by the recipient (Dakof and Taylor 1990). Also, disappointment with social support may arise due to a perceived lack of support, a failure of support attempts to match the needs of the recipient, or mal-intended interactions with support providers, such as criticism or avoidance (Revenson 1990). Individuals may thus be more likely to engage in maladaptive or counterproductive modes of coping, which can have tremendous negative repercussions on their own well-being, if they perceive that support from significant others is lacking, or when they are dissatisfied with the support provided (DeLongis and Holtzman 2005).

The Present Study

This study is the first to investigate the relationship between coping with stress due to the Coronavirus outbreak, the Big Five personality traits, and social support among Israeli-Palestinian college students (n = 625). This comprehensive examination of the relationship between personality traits and coping with stress may shed light on individual differences relevant to coping with the Coronavirus pandemic. Furthermore, this work explores the given variables in the unique context of the Palestinian-Israeli minority living in Israel. Importantly, no prior studies have examined these variables within the specific Israeli-Palestinian student population of interest, which highlights the novelty of the present research. The Israeli-Palestinian minority living in Israel is particularly distinctive; this community lives as a collective society that is experiencing rapid modernization along with "Israelization" on one hand (Al Hajj 1996), while also experiencing movements in the opposite direction through Islamization and "Palestinianization" (Samoha 2004). In addition to the formal and informal discrimination faced by many minorities, the Israeli-Palestinian community in Israel has to lead a double life as an undesirable ethnic, religious, and political minority living in a Jewish-majority state.
Hence, this community has come to see itself as a society on the fringes of Israeli society, being disadvantaged in terms of social and health services, along with a lower economic status and lower mental health welfare when compared to the Jewish community living in Israel. This population thus faces a unique dichotomy within its own culture that may cause challenges to identity formation (Ericson 1962; Sue and Sue 2003); this may inform the relationship between individual characteristics and stress coping strategies. During the Coronavirus crisis, the Israeli-Palestinian community was further marginalized in the public discourse, as it did not meet the "priority criteria" in governmental health services. This community was therefore excluded from the general mass COVID-19 tests carried out by the government, as well as from the public guidelines around the disease (Khoury 2020). The question of what support sources Israeli-Palestinians have is thus significant in the shadow of these conditions. It is unknown how the individual differences of personality traits and social support, which are known to be relevant to adaptive coping during stressful times, may apply to the unique Israeli-Palestinian student population of interest. This research is novel and necessary to better examine who among this potentially marginalized group may be at risk for maladaptive coping strategies during the Coronavirus crisis.

Based on previous research findings, the current study hypothesizes that among Israeli-Palestinian college students in Israel, (1) social support, openness, extraversion, conscientiousness, and agreeableness will be negatively associated with emotion-focused coping; (2) social support, openness, extraversion, conscientiousness, and agreeableness will be positively associated with problem-focused coping; (3) neuroticism will be negatively associated with problem-focused coping; and (4) neuroticism will be positively associated with emotion-focused coping. In addition to these associations, exploratory regression analyses were used to understand how each of the individual characteristics (Big Five personality traits and social support) uniquely relates to the outcomes of coping with stressors using problem-focused coping and emotion-focused coping.

Sample

The sample consisted of 625 Israeli-Palestinian college students, 72% of whom were female and 28% male. Participants' ages ranged from 19 to 30 years old (M = 24.8, SD = 5.88). The participants were recruited using non-random convenience sampling from eight colleges in Israel. Sixty-five percent of the participants grew up in villages (rural) and 35% grew up in cities (urban). In terms of religious affiliation, 80% of the participants identified as Muslim, 15% as Christian, and 5% as Druze. This research was conducted at the beginning of the Coronavirus outbreak in order to capture acute associations of individual differences with coping strategies at the onset of this stressful situation.

Measures

Each measure was translated from English to Arabic by five interdisciplinary Israeli-Palestinian scholars. The measures were assessed by the researchers in collaboration with the five experts, and they were checked for the clarity and cultural appropriateness of the content and translation. The questionnaires were then translated back into English by an independent expert in translation. Finally, 40 Israeli-Palestinian students completed the questionnaires and provided feedback on the process.
Demographic Variables Questionnaire

The variables included in this instrument are gender, age, residence, and religion.

Coping Style Questionnaire

This questionnaire is based on Carver, Scheier, and Weintraub's (1989) questionnaire, which was translated into Arabic (Odeh 2014). The Arabic version includes 30 items assessed on a scale from 1 = not at all to 5 = to a great extent. These items contain variables classified according to problem-focused coping (alpha = .70) and emotion-focused coping (alpha = .67). For example, an item assessing problem-focused coping is (from the English translation): "I try to come up with a strategy about what to do." All items had factor loadings greater than 0.40.

Big Five Personality Trait Short Questionnaire (BFPTSQ)

The BFPTSQ was developed by John and Srivastava (1999), and it consists of 44 items answered on a five-point Likert-type response format (totally disagree = 1 to totally agree = 5). It assesses the five personality traits: extraversion, agreeableness, conscientiousness, emotional stability/neuroticism, and openness. In the present study, Cronbach's alphas were as follows: extraversion (α = .74), agreeableness (α = .67), conscientiousness (α = .73), neuroticism (α = .75), and openness (α = .72). For example, an item assessing openness (from the English translation) is: "I see myself as someone who is inventive." All items had factor loadings greater than 0.40.

Social Support Questionnaire

This questionnaire includes 12 items assessing the characteristics of the support system, i.e., the degree of support provided and from which sources, its frequency, and its availability (Cohen and Wills 1985). The questionnaire uses a scale from 1 to 4 (1 = very untypical of me, to 4 = very typical of me). The higher the score, the greater the social support received. Reverse-scored items in the social support questionnaire are 1, 2, 7, 8, 11, and 12. The Israeli-Palestinian version of this questionnaire received a high Cronbach's alpha coefficient (α = 0.90) in a study by Agbaria (2014). In the Israeli-Palestinian version, a sample question is: "I feel that there is no one I can share my most private worries and fears with." In the present study, Cronbach's alpha was α = .87. All items had factor loadings greater than 0.40.

Research Procedure

The study sample was recruited by non-random convenience sampling at eight colleges in Israel. The research was conducted in 2020 over the course of 3 months, i.e., during the beginning of the Coronavirus outbreak in Israel. After obtaining the needed clearances from each of the colleges as well as from the ethics committee at Al-Qasemi College, the questionnaires were distributed using Google Forms, stressing that the questionnaires would remain anonymous. Eighty percent of the students who received the questionnaire agreed to participate.

Statistical Analysis

Descriptive statistics for the study variables (problem-focused coping, emotion-focused coping, Big Five personality traits, and social support) were examined. Assumptions of normality, homogeneity of variances, linearity, and independence were confirmed, demonstrating the appropriateness of parametric testing. In order to examine the study hypotheses, two approaches were used. First, bivariate correlations were examined for all study variables. Correlations were considered to have a small effect size if r = |.10-.29|, medium if r = |.30-.49|, and large if r = |.50-1.00| (a scoring and correlation sketch follows).
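To make the scoring and the first analysis step concrete, here is a minimal sketch (with made-up response data) of reverse-scoring the 1-4 social support items, computing Cronbach's alpha, and labeling a Pearson correlation with the effect-size bands given above. The reverse-item numbers follow the text; everything else (data, variable names) is illustrative, so the printed values carry no substantive meaning.

```python
import numpy as np

def reverse_score(item, low=1, high=4):
    """Reverse-score a Likert item: on a 1-4 scale, 1<->4 and 2<->3."""
    return (low + high) - item

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def effect_size_label(r):
    """Label |r| using the bands quoted in the text."""
    r = abs(r)
    if r >= 0.50:
        return "large"
    if r >= 0.30:
        return "medium"
    if r >= 0.10:
        return "small"
    return "negligible"

rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(50, 12))    # fake 1-4 answers, 12 items
reverse_items = [1, 2, 7, 8, 11, 12]             # item numbers from the text (1-based)
for i in reverse_items:
    responses[:, i - 1] = reverse_score(responses[:, i - 1])

support = responses.sum(axis=1)                            # total support score
coping = support * 0.1 + rng.normal(size=50) * 3           # fake correlated outcome
r = np.corrcoef(support, coping)[0, 1]
print(f"alpha = {cronbach_alpha(responses):.2f}; r = {r:.2f} ({effect_size_label(r)})")
```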
Second, linear multiple regression models were employed to test the associations of the Big Five personality traits and social support with problem-focused and emotion-focused coping. The multiple regression model was used in order to understand how each of the individual characteristics (Big Five personality traits and social support) uniquely relates to the outcomes of coping with stressors using problem-focused coping and emotion-focused coping, while also controlling for the other independent variables in the model.

Table 1 shows the descriptive statistics of the study variables. Overall, the students exhibited medium scores on the coping styles questionnaire, medium scores on each of the BFPTSQ subscales, and medium-high scores on the social support questionnaire. Table 2 reveals the correlations between the study variables. Consistent with the first study hypothesis, there was a significant negative correlation between emotion-focused coping scores and social support (r = −.40, p < .01), BFPTSQ openness (r = −.30, p < .01), BFPTSQ extraversion (r = −.32, p < .01), BFPTSQ conscientiousness (r = −.33, p < .01), and BFPTSQ agreeableness (r = −.34, p < .01). Consistent with the third study hypothesis, there was a significant negative correlation between problem-focused coping scores and BFPTSQ neuroticism (r = −.36, p < .01). The regression model (Table 4) found that BFPTSQ neuroticism explains a significant amount of variance in problem-focused coping scores (β = −.23, p < .01). Consistent with the fourth study hypothesis, there was a significant positive correlation between emotion-focused coping scores and BFPTSQ neuroticism (r = .42, p < .01). The regression model (Table 4) also found that BFPTSQ neuroticism explains a significant amount of variance in emotion-focused coping scores (β = .22, p < .01).

Discussion

This study examined the relationship between coping with stress, the Big Five personality traits, and social support among Israeli-Palestinian college students, in order to help elucidate the individual characteristics that may assist students while coping with stressful conditions. The results show that adaptive personality traits (openness, extraversion, conscientiousness, agreeableness) were positively correlated with problem-focused coping and negatively correlated with emotion-focused coping, whereas the maladaptive personality trait (neuroticism) was negatively correlated with problem-focused coping and positively correlated with emotion-focused coping. Furthermore, the results of this study demonstrate that openness and conscientiousness have the most significant positive correlation with problem engagement as an active problem-focused coping strategy, while extraversion and agreeableness have the most significant positive correlation with positive reinterpretation and growth. These findings add important insight to preliminary research demonstrating that adaptive coping strategies lead to more favorable psychosocial outcomes during COVID-19, by highlighting the individual differences among Israeli-Palestinian students that may enhance the risk for poorer psychological functioning during this time.
Big Five Personality Traits and Coping Styles

Consistent with previous research, individuals with greater openness, extraversion, conscientiousness, and agreeableness were found to be more likely to employ problem-focused coping and less likely to employ maladaptive emotion-focused coping, whereas those with greater neuroticism were found to be less likely to employ problem-focused coping and more likely to employ maladaptive emotion-focused coping.

Openness was found to be positively related to problem-focused coping and negatively related to emotion-focused coping, which is consistent with prior findings that demonstrate the importance of openness for adaptive coping (Chai and Low 2015; Penley and Tomaka 2002). This may be explained by openness including curiosity about both inner and outer worlds, so that individuals who score high on the openness measure are more flexible and creative than others. Additionally, individuals who score high on the openness measure experience their emotions in a comfortable manner, as they are better able to accept their own emotions as well as the emotions of others. This suggests that these individuals are able to cope with stress flexibly by using multiple coping strategies to minimize the negative effects of the stressor and/or the distress experienced (Roesch et al. 2006). Therefore, these individuals tend to use problem-focused coping more often than emotion-focused coping (Leandro and Castillo 2010; Suls et al. 1998).

Extraversion was also found to be positively related to problem-focused coping and negatively related to emotion-focused coping, which is consistent with previous studies that highlight the importance of extraversion for adaptive coping (Karimzade and Besharat 2011; Roesch et al. 2006; Wang and Miao 2009; Zainah et al. 2019). These findings can be explained by considering how individuals who score high on the extraversion measure are usually drawn towards excitement and optimism. When facing a stressful situation or event, these individuals employ coping strategies that support their interpersonal relationships. Previous research has indicated that individuals who score high on the extraversion measure use active coping strategies and positive reappraisal. For this reason, individuals who are high in extraversion have the ability to cope flexibly, as they are able to adapt their coping responses to each situation (Fickova 2009; Marnie 2008).

Likewise, agreeableness was found to be positively related to problem-focused coping and negatively related to emotion-focused coping, which parallels studies that suggest the importance of agreeableness for adaptive coping (Fickova 2009; Leandro and Castillo 2010). One explanation for this can be found in the notion that individuals who score high on the agreeableness measure have a preference for altruism, self-satisfaction, trust, and usefulness. These individuals thus use positive reappraisal strategies, social support, and careful planning to a much greater extent than maladaptive emotion-focused coping strategies such as self-blaming and avoidance (Fickova 2009; Leandro and Castillo 2010). This leads to the conclusion that agreeableness is a positive trait that is specifically helpful during times of crisis. This is because individuals high in agreeableness usually avoid conflict and do not take advantage of others (Costa and McCrae 1992), which are considered key characteristics for dealing more effectively with stress caused by a crisis (Mirhaghi and Sarabian 2016).
Conscientiousness was also found to be positively related to problem-focused coping and negatively related to emotion-focused coping, which aligns with existing studies that note the importance of conscientiousness for adaptive coping (Karimzade and Besharat 2011; Leandro and Castillo 2010; Roesch et al. 2006). Such findings can be explained by the notion that individuals who score high on the conscientiousness measure are usually careful planners and rational decision makers, especially when faced with a stressor (Chartrand et al. 1993; Hooker et al. 1994; Vollrath et al. 1994). These individuals thus tend to use active problem-focused coping strategies and to avoid maladaptive emotion-focused coping strategies (McCrae and Costa Jr 1986; Ramírez-Maestre et al. 2004; Leandro and Castillo 2010).

At the other end of the spectrum, a person who is high in neuroticism has a tendency to easily experience emotional imbalance. In contrast to the findings above, neuroticism was found to be negatively related to problem-focused coping and positively related to emotion-focused coping, which is consistent with prior research demonstrating the positive correlation between neuroticism and maladaptive coping strategies (Carlo et al. 2012; Chwaszcz et al. 2018; Roesch et al. 2006; Zainah et al. 2019). One explanation for this can be found in the notion that individuals who are high in neuroticism are more susceptible to psychological distress and irrational thoughts and are less able to control their impulses (Costa and McCrae 1992). Therefore, many researchers have found that neuroticism is closely related to maladaptive and passive coping strategies such as self-blame, fantasizing, and avoidance (Hengartner et al. 2017; Eksi 2010; Wang and Miao 2009).

Social Support and Coping Styles

The study results indicate that social support positively correlated with problem-focused coping and negatively correlated with emotion-focused coping, consistent with prior studies that highlight the importance of perceived social support for adaptive coping (Agbaria 2019; Agbaria and Bdier 2019; Agbaria and Natur 2018; Umberson and Karas Montez 2010; Wang et al. 2014). This can be explained by the notion that social support is effective in enhancing one's well-being because it acts as a coping assistant (O'Brien and DeLongis 1996; Thoits 1986). Perceptions of the availability of support, and perceptions of support attempts from close ones, influence the use of specific coping strategies as well as the effectiveness of the coping strategies employed (Carpenter and Scott 1992). Social relationships influence people's ways of coping in a number of ways. One way, for instance, is through the use of social referencing (Bandura 1986); that is, people turn to others for a sense of what is considered appropriate coping in a given situation. Social relationships also influence coping by providing information about the likely efficacy of particular coping strategies (Carpenter and Scott 1992).

Limitations

The results of this study should be interpreted in light of the following limitations. The sample was not recruited as a randomized subset of the larger population, since the aim was to understand the features of one unique group; future research may consider recruiting a more diverse sample to permit direct comparisons across racial and religious groups as a means to increase the generalizability of the current findings. In addition, the data comprised only self-report questionnaires, which may be subject to reporting bias.
This may be especially pronounced among college students, who may be more likely to either conform to or rebel against social norms. Thus, further studies may consider additional tools of measurement (e.g., behavioral observations). Finally, there is a need for future research to examine the relationships between coping with stress, the Big Five personality traits, and social support by testing a wider variety of moderating and mediating effects.

Conclusions

This study is the first to investigate the relationship between coping with stress at the beginning of the Coronavirus outbreak, the Big Five personality traits, and social support among a unique population of Israeli-Palestinian college students. The aim of the study was to examine how personality traits and social support may increase one's adaptive coping. The results suggest that positive social support increases one's ability to cope actively, adaptively, and efficiently. In addition, the results demonstrate that individuals high in openness, extraversion, agreeableness, and conscientiousness tend to use active problem-focused coping, while individuals high in neuroticism tend to use maladaptive emotion-focused coping. The present research provides valuable insight into coping with stress in a manner that may increase early identification and intervention efforts during the COVID-19 pandemic.

Compliance with Ethical Standards

Conflict of Interest: The manuscript has only been submitted to the International Journal of Mental Health and Addiction; it will not be submitted elsewhere while under consideration, and it has not been published elsewhere, either in similar form or verbatim. I am responsible for the reported research, and all authors have participated in the concept and design, the analysis and interpretation of data, and the drafting or revising of the manuscript, and I have reviewed/approved the manuscript. There are no conflicts of interest.

Ethical Approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed Consent: Informed consent was obtained from all individual participants included in the study.
Leucokinins: Multifunctional Neuropeptides and Hormones in Insects and Other Invertebrates

Leucokinins (LKs) constitute a neuropeptide family first discovered in a cockroach and later identified in numerous insects and several other invertebrates. The LK receptors are only distantly related to other known receptors. Among insects, there are many examples of species where genes encoding LKs and their receptors are absent. Furthermore, genomics has revealed that LK signaling is lacking in several of the invertebrate phyla and in vertebrates. In insects, the number and complexity of LK-expressing neurons vary, from the simple pattern in the Drosophila larva, where the entire CNS has 20 neurons of 3 main types, to cockroaches with about 250 neurons of many different types. Common to all studied insects is the presence of 1–3 pairs of LK-expressing neurosecretory cells in each abdominal neuromere of the ventral nerve cord that, at least in some insects, regulate secretion in Malpighian tubules. This review summarizes the diverse functional roles of LK signaling in insects, as well as in other arthropods and mollusks. These functions include regulation of ion and water homeostasis, feeding, sleep-metabolism interactions, and state-dependent memory formation, as well as modulation of gustatory sensitivity and nociception. Other functions are implied by the neuronal distribution of LK, but remain to be investigated.

Introduction

Neuropeptide signaling regulates major aspects of development, growth, reproduction, physiology, and behavior of animals. A large number of structurally diverse peptides have been identified that act on different types of receptors as co-transmitters, neuromodulators, and hormones [1-6]. In insects, one of the peptides that has attracted substantial attention recently is leucokinin (LK), although it was discovered in a cockroach more than 30 years ago [7]. We thus decided that it is timely to review what we know about LK signaling in insects and other invertebrates. Like many other well-known insect neuropeptides, LKs were first identified from an extract of the head of the Madeira cockroach Leucophaea maderae (now Rhyparobia maderae) by assaying purified fractions for their activity on hindgut contractions of this animal (see [7-9]). Altogether, eight LKs (sequence-related paracopies) were identified in L. maderae, which share the C-terminal pentapeptide FXSWGamide [7,10]. Apart from their stimulatory action on muscles, another early function assigned to LKs was a role as a diuretic factor that increases secretion in the Malpighian (renal) tubules of various insects [11-14]. As we shall see in later sections, we now know that LKs have truly pleiotropic functions as neuromodulators and hormones in insect development, physiology, and behavior. In earlier studies, these peptides were named kinins with a species prefix, such as achetakinins, muscakinins, and lymnokinins, and only later was the original name leucokinin adopted more generally for peptides with the generic C-terminal pentapeptide. Thus, we will use LK here, except when some species-specific aspect is discussed and a species-specific name has been assigned. Early on, it was suggested that the LKs are ancestrally related to the vertebrate tachykinins due to some rather minor amino acid sequence similarities.
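Given the shared FXSWGamide C-terminus, one can sketch how candidate LK paracopies might be counted within a precursor sequence. The toy scan below looks for the F-X-S-W-G motif followed by a basic residue (K/R), the usual cleavage signature at the C-terminal side of an amidated peptide in a prohormone. The example sequence is invented, and real precursor annotation uses more elaborate processing-site rules than this heuristic.

```python
import re

# F, any residue, S, W, G, then K or R (cleavage site; the exposed Gly
# becomes the C-terminal amide during processing). Simplified heuristic.
LK_MOTIF = re.compile(r"F[A-Z]SWG(?=[KR])")

def count_lk_paracopies(precursor):
    """Count non-overlapping FXSWG-amide motif hits in a precursor sequence."""
    return len(LK_MOTIF.findall(precursor))

# Invented mini-precursor with two motif hits, for illustration only.
toy = "MKTLLVLAVLAFASWGKRSSDAFNSWGKREAEA"
print(count_lk_paracopies(toy))  # -> 2
```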
Outside arthropods, multiple LK paracopies are also known. In the annelid worm Urechis unicinctus, eight paracopies of LKs have been identified [56], and the largest number of LKs was found in the LK precursor of the marine slug Aplysia californica, with 30 (Figure 3) [22]. In some species, such as Frankliniella, Rhodnius, the bed bug Cimex lectularius, and Aplysia, the prepro-LK can give rise to additional non-LK peptides, resulting in a total of about 60 peptides in Aplysia [21,22,52,57]. Of note is that, to our knowledge, none of these non-LK peptides have been studied further in any organism (and very few have been verified by mass spectrometry). There are few cases of a species having more than one LK precursor; one is the squid Sepia officinalis, with two prepro-LKs [58] (Figure 3).

Figure 3 caption: Except for Aplysia, red boxes represent leucokinins, and signal peptides are indicated by blue boxes. Primary sequence data of Aplysia are from [22], squid from [58], polychaete worm from [15], and cattle fever tick from [59].

A striking feature is that in many invertebrate species whose genomes have been sequenced, LK precursors have not been found. Actually, to our knowledge, only arthropods, tardigrades, annelids, and mollusks have thus far been shown to produce LK precursors. Even among insects, not all species have LKs. For instance, in the order Coleoptera (beetles), 34 species have been analyzed, and only in four, Pogonus chalceus, Gyrinus marinus, Carabus violaceus, and Carabus problematicus (all in the suborder Adephaga), were LK precursors found [60-62]. Thus, no LK precursor was detected in the "model Coleopteran" Tribolium castaneum. LK precursors and receptors are also missing in, for example, some parasitic wasps (e.g., Nasonia vitripennis), but not in all [63,64], and were not found in any ant species analyzed to date [65-67]. An LK precursor is also missing in the phyllopod crustacean Daphnia [68], although LKs are present in decapod crustaceans (see [69]). It can be noted that an LK-like peptide sequence (Nlp43: KQFYAWAamide) has been identified in nematodes such as C. elegans [70] (see Figure 1), but it is not derived from a canonical LK precursor and no LKR could be found [3,15]. Furthermore, LK signaling components are not found in cnidarians (see [71,72]) or flatworms (Platyhelminthes) [73].
In lower bilaterians, such as species of Xenoturbella and nemertodermatid worms (both phylum Xenacoelomorpha), orthologs of LK-type receptors were detected by bioinformatics, but no LK peptides [74]. Finally, in some species, such as honey bees, LK precursors have been identified that could generate three LKs, but the cleaved peptide products could not be detected by mass spectrometry [65]. However, since orthologs of LKRs have been identified in five sequenced bee genomes, it is likely that LK signaling is present in these hymenopterans, but not in ants [65,75]. In support of the importance of LK signaling in honeybees, a recent paper showed that the lkr gene influences labor division in foraging for pollen and nectar in the Asian honeybee (Apis cerana) [76].

As noted above, the LKRs identified seem to have no vertebrate orthologs and are found only in the invertebrate species where LK precursors have been detected (Figure 4), possibly with the exception of the Xenacoelomorphs mentioned above. Only a few LKRs have been characterized by ligand activation (Figure 4). Thus, LK signaling is not universally present among invertebrates, in contrast to several other more widespread neuropeptides, such as adipokinetic hormone (AKH)/GnRH, neuropeptide F, and insulin-like peptides (see [1-3,15]), and this begs the question as to whether some other neuropeptide system has taken over LK functions. It is also interesting to note the large differences in the number of paracopies in the different LK precursors, ranging from 1 to about 30.

Figure 4 caption: Amino acid sequences of full-length receptors were used for the analysis. Sequences were aligned using Clustal X. Maximum likelihood trees were constructed with the MEGA X software. The numbers at the nodes of the branches represent the percentage bootstrap support (1000 replications) for each branch. Receptors that have been functionally characterized are indicated by a red symbol after the species name. Sequences used to generate the phylogeny are provided in Supplementary Material Text File S1.
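The tree-building workflow in the Figure 4 caption (alignment in Clustal X, maximum likelihood trees with bootstrap support in MEGA X) relies on interactive tools. As a rough programmatic analogue, the sketch below builds a neighbor-joining tree with bootstrap support values using Biopython. Note two substitutions: distance-based NJ stands in for MEGA's maximum likelihood method, and only 100 replicates are run instead of the paper's 1000; the alignment file name is a placeholder for an existing multiple sequence alignment of LKR sequences.

```python
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_trees, get_support

# Placeholder file: a pre-computed multiple sequence alignment of LKR sequences.
alignment = AlignIO.read("lkr_receptors.aln", "clustal")

# Distance-based neighbor-joining tree (a stand-in for MEGA's ML method).
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, method="nj")
tree = constructor.build_tree(alignment)

# Bootstrap pseudo-replicates; annotate each clade with percentage support.
replicates = list(bootstrap_trees(alignment, 100, constructor))
tree_with_support = get_support(tree, replicates)

for clade in tree_with_support.find_clades():
    if clade.confidence is not None:
        print(clade.name, clade.confidence)
```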
Thus, we used these old LK immunohistochemical data to illustrate a peptidergic system quite different from that in Drosophila (see Figure 5D) and some other insects, such as locusts (Figure 6A).
Figure 5 (legend, in part): LK cell bodies are predominantly found in the protocerebrum (Protoc), including the optic lobes (OL) and the accessory medulla (aMe; pacemaker region of the clock), but some are in the tritocerebrum (Tritoc). Neuronal processes from LK neurons (not shown) are in the central body, optic lobe, and antennal lobe (AL), and in less delineated neuropils in all three brain neuromeres. A group of four neurons (SIFamide-producing, SIFa) in the pars intercerebralis coexpresses SIFamide and LK. These SIFa neurons are known to send processes throughout the brain and ventral nerve cord [83,84]. Panel A is altered from [34] with SIF neurons added [83], B is from [86], and C is altered from [87]. All figures used with permission from publishers.
There are about 160 LK neurons with cell bodies in the protocerebrum of the brain (Figure 5A), some in bilateral clusters and others occurring in bilateral pairs distributed in different regions [27]. No cell bodies were detected in the deuto- and tritocerebrum, and only a small set of weakly immunoreactive neurons was detected in the fused subesophageal ganglion. In each of the two lateral neurosecretory cell (LNC) groups there were six LK cells, and in the median neurosecretory cell group (MNC), about 100 were found (Figure 5A). Both the LNCs and MNCs send LK-immunolabeled axons to the neurohemal area of the corpora cardiaca, suggesting that the LKs can be released as hormones into the circulation. Radioimmunoassay analysis of HPLC-separated corpora cardiaca extracts suggested that all eight LKs known at the time are present in this tissue [32]. Furthermore, it was indeed shown by radioimmunoassay (RIA) that release of LKs can be triggered in vitro from the corpora cardiaca of both cockroach [88] and cricket [89]. Furthermore, in the bug Rhodnius, RIA of hemolymph demonstrated both LK and diuretic hormone (DH44) release after feeding, suggesting a postprandial hormonal role of LK [90]. In contrast to Drosophila, where one pair of LK interneurons is seen in the brain and one pair in the subesophageal zone (SEZ) (Figure 5D), the cockroach brain has a complex set of interneurons (Figure 5A-C). Different LK neurons, originating in the protocerebrum, send processes to the central body; the optic lobe (medulla and lobula); the antennal lobes; and various neuropil regions in the proto-, deuto-, and tritocerebrum. Two pairs of large LK-immunoreactive descending neurons (DNs) send axons throughout the ventral nerve cord, finally ending in the terminal abdominal ganglion. These pairs of DNs have collateral arborizations ipsilaterally in most of the glomeruli of the antennal lobe and the posterior deutocerebrum (Figure 5A,B). A small set of branches from the DNs innervates the calyces of the mushroom bodies [81] (Figure 5B). The group of LK neurons associated with the medulla [27] has been described in more detail as part of the accessory medulla complex, a pacemaker region of the circadian clock [91-93]. Some of these LK neurons colocalize pigment-dispersing factor (PDF), which is one of the major neuromodulators of the clock in L. maderae and Drosophila [91,93,94]. Similar to Drosophila and other studied insects, each abdominal ganglion has sets of neurosecretory cells (ABLKs; abdominal ganglion LK neurons) expressing LK. However, instead of one pair of ABLKs in each ganglion/neuromere, as seen in Drosophila and some other dipteran flies [28], L. maderae has two pairs [27]. Two pairs of ABLKs are also seen in, e.g., crickets, crane flies, moths, and mosquitos, whereas there are three pairs in the first four abdominal ganglia of locusts and two in the following ganglia (Figure 6B), and up to 10 pairs per ganglion in dragonflies [29,30,95,96]. In cockroaches and locusts, these ABLKs send varicose axons to the lateral heart nerves and transverse nerves, where neurohemal areas (perivisceral organs) are formed; moreover, the spiracles receive LK axon terminations [27,30,97]. Although LK was originally isolated by means of its activity on hindgut contractions, no LK innervation of this tissue was detected [27], suggesting that this myotropic action is mediated by hormonal LK.
Another difference from Drosophila is that the cockroach thoracic ganglia each have at least two pairs of LK-expressing interneurons that arborize widely in the lateral portions of the ganglia [27,30]. As in Drosophila, there are no LK-expressing enteroendocrine cells (EECs) in the L. maderae intestine. However, there are bi- or multipolar LK neurons in the posterior midgut with ascending axons running via the esophageal nerve to end with arborizations in the frontal ganglion and tritocerebrum [27]. These might be proprioceptive cells that signal gut distension to the frontal ganglion and other feeding circuits. Additionally, LK-immunoreactive axons from the retrocerebral complex (in particular the frontal and hypocerebral ganglia) were found to innervate the pharynx and esophagus [27]. Mapping of LK neurons in the brain of the cockroach Nauphoeta cinerea revealed a similar set of neuron types [30]. Thus, taken together, the cockroach LK neurons are more diverse than those in Drosophila (Figure 5A-D) and seem to underlie distributed functions in different brain/ganglion regions. Such functions may include neuromodulation in the olfactory system, visual system, central complex, mushroom bodies, circadian clock, tritocerebral neuropil, circuits of the thoracic ganglia, and the frontal ganglion (regulation of feeding) [27]. The two pairs of protocerebral descending LK neurons (Figure 5A,B), which span the entire ventral nerve cord, may provide a pathway linking protocerebral and olfactory systems to regulate ganglionic activity. In addition, there are three types of neurosecretory cells producing LKs, namely the LNCs, MNCs, and ABLKs, which probably release LKs into the circulation to target peripheral organs such as the Malpighian tubules, heart, and visceral muscle. Furthermore, peripheral cells that may be proprioceptors were found in the intestine of L. maderae. Unfortunately, there are no data on any functions of LKs in cockroaches, except the stimulatory activity on the hindgut muscle in vitro [7,9]. Thus, we can only speculate that LK signaling in the cockroach is functionally more diverse than in Drosophila, with its four neurons in the brain/SEZ and 22 ABLKs. The four brain/SEZ neurons of Drosophila (Figure 5D) do not seem to have any obvious analogs in the cockroach brain, but there are three bilateral pairs of L. maderae LK neurons that could play roles similar to the pair of LHLKs (one is labeled LHn in Figure 5A,C). The SELKs are descending neurons in Drosophila with cell bodies and processes in the SEZ [33,51], whereas the cockroach descending neurons originate in the protocerebrum and innervate the antennal lobes on their descent (Figure 5A,B). The LK-expressing LNCs of L. maderae may be analogous to the ALKs of Drosophila (Figure 5D). These Drosophila ALK neurons can be seen in several Lk-Gal4 lines, but only in early larvae do they consistently label with antisera to LK [33,51]. These Drosophila neurons were first described as LNCs expressing ion transport peptide (ITP), a peptide that may act in the regulation of thirst and hunger and probably also plays a role in ion transport in the intestine [98,99]. The Drosophila ALKs were also shown to express tachykinins (TKs) and short neuropeptide F (sNPF), and these peptides were found to regulate metabolic and desiccation stress responses [82]. It is not known whether the L. maderae LNCs express further neuropeptides, but possibly their functional roles are similar to those in Drosophila.
On the basis of the anatomy and distribution of the cockroach LK neurons, one could speculate that some of the other LK functions determined in Drosophila also apply to L. maderae: roles in circadian clock output and sleep, in feeding, and in the regulation of water and ion homeostasis (see [17,51,100-103]).
Distribution of LK in Other Invertebrates: What Can Comparative Studies Teach Us?
In the previous section, we described the LK neurons of the cockroach L. maderae with some comparative comments on Drosophila, two insects that highlight two extremes in terms of the number and diversity of LK neurons. Here, we briefly summarize findings of interest in other invertebrates and discuss the coexpression of LK and other peptides.
LK in Neurons of the Brain of Other Insects
LK distribution has also been described in the brains of several other insects, including the blood-sucking bug Rhodnius prolixus, the locusts Locusta migratoria and Schistocerca gregaria, the cricket Acheta domesticus, and the mosquito Aedes aegypti [30,31,34], as summarized in Table 1. As an example, we show LK neurons in L. migratoria (Figure 6A), where some interesting features differ from Drosophila and Leucophaea.
Table 1 (notes): x, present; −, not present; no annotation, not clear whether present or not (no statement is provided in the papers). Acronyms: CB, central body; AL, antennal lobe; OL, optic lobe; LH, lateral horn; TC, tritocerebrum; SEZ, subesophageal zone; DNs, descending neurons; LNC, lateral neurosecretory cells; MNC, median neurosecretory cells. (1) The majority of the LK cell bodies are in the protocerebrum and subesophageal zone (SEZ), but processes innervate neuropils in other brain regions. (2) In ALK neurons (LNCs), LK expression is strong in larvae and weak and variable in adults. (3) The description of the distribution of LK neurons and their processes is not detailed.
The distribution of various neuropeptides has been extensively investigated in the locust brain, some in exquisite detail (see [104,105]), whereas the LK distribution has received more superficial attention. In the brain of L. migratoria, about 140 LK-immunoreactive neurons were detected [34] (Figure 6A). Their cell bodies are primarily located in the protocerebrum, but about 5-6 pairs were detected in the tritocerebrum. No clear-cut neurosecretory cells were seen in the brain, but LK-expressing interneurons are associated with the optic lobe and the accessory medulla (pacemaker center of the clock), the central body, and the antennal lobe [34,105]. As in the L. maderae brain, two pairs of descending LK neurons innervate the antennal lobes on their way to the ventral ganglia in S. gregaria [34,106]. There is an additional pair of larger tritocerebral descending neurons in L. migratoria [34] (Figure 6A). Distinct LK-immunolabeled processes can be seen in protocerebral neuropils such as the upper and lower divisions of the central body, the median and lateral accessory lobes of the central complex, and the protocerebral bridge, but not in the mushroom bodies. In the optic lobes, specifically the most basal portion of the lamina, different layers of the medulla (including the accessory medulla), and the lobula contain LK fibers. A supply of immunoreactive fibers can also be seen in the glomeruli of the antennal lobe, and many of the non-glomerular neuropils of the proto-, deuto-, and tritocerebrum contain diffusely arborizing LK fibers. An interesting finding is that in S.
gregaria a set of four SIFamide-expressing neurons in the pars intercerebralis of the brain colocalizes LK [83] (see Figure 6A). As in Drosophila, the processes from these SIFamide neurons innervate most neuropil regions of the brain and ventral nerve cord [83,85]. The LK expression in these neurons is weak in adult locusts but nevertheless suggests that LK may play a role in the signaling of these SIFamide neurons. The homologous SIFamide neurons in Drosophila are known to orchestrate feeding, sleep, and mating in a nutritional state-dependent fashion [85,107,108]. Another interesting aspect of these SIFamide neurons in the locust is that they are identical to the LK-expressing primary commissure pioneer neurons (PCPs) that lay down an early axonal tract (commissure) in the brain of the locust embryo [83,84]. Since LK immunolabeling was found to be stronger in the SIFamide neurons in younger stages than in the adult [84], it is suggestive that LK plays a role during neuronal development and axonal pathfinding in the brain. In the brains of the cricket Acheta domesticus and the mosquito Aedes aegypti, the distribution of LK neurons is similar to that in L. maderae, with both LNCs and MNCs and their axon terminations in the corpora cardiaca expressing the peptide, but other interneurons were not described in enough detail for comparisons to be made [30]. The same authors found that there are no LK-immunoreactive neurons in the brain of the honeybee Apis mellifera, only neurosecretory cells in the abdominal ganglia [30]. Finally, in the brain of the blood-sucking bug Rhodnius prolixus, about 180 pairs of LK-immunoreactive neurons were detected, 30 pairs of which were more strongly labeled [31]. These were later confirmed by in situ hybridization [109]. Processes of LK interneurons were seen widely in brain neuropils. In starved specimens, a set of MNCs and their processes in the corpora cardiaca could be detected with LK antiserum [31], suggesting that LK expression depends on nutritional state and that this peptide plays a role as a systemic hormone. Injection of a biostable LK analog decreased blood intake in a feeding assay [110]. Furthermore, RIA of hemolymph demonstrated that LK is released after feeding [90]. In R. prolixus, LK does not display diuretic activity in the Malpighian tubules or anterior midgut (in contrast to, e.g., DH44), but it decreases the resistance and transepithelial voltage of the epithelium and also increases the frequency of contractions in the anterior midgut [31,111]. LK also induces contraction in the R. prolixus hindgut [109,110]. R. prolixus is the only insect that has thus far displayed LK-producing enteroendocrine cells in the midgut [31].
LK in Neurons of the Nervous System of Other Invertebrates
The only phylum outside the arthropods where bona fide LK distribution has been described is the mollusks. LK-expressing neurons in mollusks have been mapped for Lymnaea stagnalis, Helix pomatia, and Aplysia californica [22,44,112]. In the snail Helix, about 700 LK-immunoreactive neurons were found in the CNS [112]. The buccal, cerebral, and pedal ganglia, as well as the viscero-parietal-pleural ganglion complex, all express LK in numerous neurons. One giant LK neuron was found in the pedal ganglion. Two major groups of LK neurons in the cerebral ganglia send axons into commissures to other ganglia and into several peripheral nerves [112].
Several peripheral tissues, such as the buccal mass, oviduct and intestinal muscle, and "skeletal" muscle (of the foot, lip, and tentacle), are supplied by varicose LK axons. In addition, bipolar LK neurons were found in the intestine and were shown to send axons into the extensive meshwork of LK fibers seen there. Some groups of LK neurons in the cerebral ganglion coexpress tachykinin immunoreactivity [112]. It is not clear whether any of the LK neurons serve as bona fide neurosecretory cells, but it cannot be excluded that the abundant superficial LK axons in peripheral tissues might release LK into the circulation. In Aplysia, the majority of the LK neurons were found in the buccal ganglion, which is known to house feeding motor neurons and pattern-generating interneurons [22]. LK neurons were also seen in the cerebral ganglion, where higher-order feeding interneurons are located. These authors found that the buccal motor neuron B48 expresses LK and that application of this peptide ex vivo modulated a parameter of the consummatory feeding behavior [22]. One target of LK action is a central pattern generator element that modulates the duration of the protraction phase of feeding responses. Thus, this Aplysia study provides a mechanistic description of LK modulation of food ingestion, something that is lacking thus far for Drosophila and other insects. However, roles of LK in food consumption and post-feeding physiology have been demonstrated in Drosophila [51,103,113] and are suggestive in Rhodnius [31,111].
Neurosecretory Cells and Hormonal Roles of LK in Invertebrates
One striking conserved feature is that all studied insects have segmental abdominal neurosecretory cells (ABLKs), varying in number from one pair per neuromere in Drosophila (Figure 6C) and blowflies to up to 10 pairs in dragonflies [28-31,96]. Commonly, insects have two to three pairs per neuromere/ganglion (see [29,30,97]) (Figure 6B). These neurosecretory cells have axon terminations associated with peripheral nerves (including the lateral heart nerves), perisympathetic organs, and the body wall muscle of the abdomen. Since these abdominal cells are the only LK-expressing neurosecretory cells in several of the species studied, it is likely that these cells release LK as a circulating hormone. Thus, an important function of LKs is to act systemically as diuretic hormones, and they are also likely to regulate gut contractions in some species (see [9,11,36,49,50,75,87,114,115]). As mentioned, LK release has been demonstrated in L. maderae, A. domesticus, and R. prolixus [32,89,90]. In several insect species, including Drosophila, Musca domestica, Manduca sexta, and Rhodnius prolixus, the abdominal LK cells coexpress the neuropeptide DH44 [31,87,116,117]. In Rhodnius, DH44 stimulates secretion in the Malpighian tubules, whereas LK has no direct action on the tubules but may act elsewhere (e.g., anterior midgut and hindgut) to assist in rapid diuresis [31,110,111]. Both LK and DH44 are released after feeding in Rhodnius [90]. In Drosophila, on the other hand, both DH44 and LK stimulate secretion in the tubules, but by acting on different cell types and with different signal pathways downstream of the receptors [17,75,87,118,119]. As mentioned above, some insect species possess additional LK-expressing neurosecretory cell systems in the brain.
It is not known whether the cells of the brain and the abdominal ganglia (when both exist) play different functional roles, but it is at least likely that LK release from these cell groups is under the control of different central neuronal circuits. It is also possible that in the LNCs other neuropeptides are colocalized with LK, as is the case in the Drosophila ALKs with additional TK, ITP, and sNPF [51,82]. For instance, in M. sexta and L. migratoria, sets of LNCs are known to produce ITP [120,121]. Interestingly, the only insect known to have LK-expressing endocrine cells in the midgut is R. prolixus [31]. Thus, LK is a rare peptide in intestinal signaling, in contrast to many other neuropeptides (see [122-124]). In crustaceans, LKs have not yet been detected in the canonical neurosecretory system, the X-organ/sinus gland of the eyestalks, or in the stomatogastric system [125]. However, in the pericardial organs of the crab Cancer borealis, varicose LK-immunoreactive axons were detected (probably derived from cell bodies in the thoracic ganglia), suggesting that hormonal release of LK is possible [126]. Peptides from the pericardial organs are known to act as circulating hormones on circuits of the crab stomatogastric ganglion, and indeed shrimp LK applied to the ganglion has a distinct modulatory action on the pyloric rhythm of the network [126]. Not all arthropods seem to use LKs as hormones. In the spider Cupiennius salei, no LK-immunolabeled neurosecretory cells were detected, and the LK interneurons are in fact not segmentally arranged; rather, the nine pairs of cell bodies are clustered anteriorly in the supraesophageal ganglion [42]. However, in another arachnid, the tick Rhipicephalus appendiculatus, four pairs of neurosecretory cells located anteriorly in the protocerebral lobe produce LK [127]. These cells have arborizing axon terminations in neurohemal areas in the neural sheath surrounding the CNS and colocalize the neuropeptide myosuppressin. In mollusks, no bona fide neurosecretory cells producing LK have been described, but in the snail H. pomatia, sets of LK neurons clustered in the cerebral ganglia have axons running out in several nerve roots to innervate peripheral tissues [112]. These peripheral varicose axons might release LK into the circulation, but further studies are required to verify this. Although LK has been demonstrated in annelids, such as Urechis unicinctus and Capitella teleta [56,128], there are, as far as we know, no reports on the cellular localization of the peptide. In the parasitic nematode Ascaris suum, LK immunoreactivity was detected in neurons [45], but no LK precursor gene has been identified in nematodes, and thus it is not clear what endogenous peptide the antiserum recognized.
Specific Roles of LK Signaling in Arthropods
Here, we present a brief summary of the diverse functions of LK signaling in arthropods. Most of the recent work has been performed in Drosophila, but we will describe that only very briefly in Section 4.4.5, since a more detailed review on Drosophila will appear elsewhere. In Table 2, we list the known functions of LK signaling in different insects and some other invertebrates.
Table 2 (notes): (1) The LK expression in SIFamide neurons is stronger during development, but remains throughout development and the adult stage. (2) The Asian honeybee Apis cerana.
Myostimulatory Action
LKs act in vitro to increase the frequency and amplitude of contractions in the hindgut of L.
maderae [7,9] and the housefly Musca domestica [40], and in the anterior midgut and hindgut of the bug R. prolixus [109-111], but have no effect on either hindgut or oviduct contractions in the locust L. migratoria [36,49].
Diuretic Action
A more widespread action is the stimulatory action of LKs on Malpighian tubules, shown in [11,12,14,17,41,46,50,75,97,116]. In the studied insects, LKs activate the LKR, leading to an increase in intracellular calcium, which activates a chloride shunt conductance and water transport across the tubule epithelium [14,118,145,146]. In dipteran insects, such as Drosophila, Anopheles gambiae, and Aedes aegypti, this action is mediated by the stellate cells of the tubules, which express the LKR [25,75,118,147]. LK signaling appears to have been secondarily lost in most species of beetles (Coleoptera), and mining of the genome of Tribolium castaneum shows that other signaling systems known to be associated with diuretic functions in insects are greatly expanded [75].
Modulation of Sugar Gustation in the Mosquito Aedes aegypti and the Asian Honeybee Apis cerana
In females of the mosquito Aedes aegypti, application of a protease-resistant LK to the mouthparts and proleg tarsi resulted in inhibition of sucrose feeding and induction of an escape behavior, wherein the insect walked or flew away from the food [138]. It was shown that the LKR is expressed in chemosensory cells in the proleg tarsi and labellar sensillae, and the LK analog applied to the mouthparts blocked the electrophysiological response to sugar in chemosensory sensillae. Furthermore, LKR RNAi (RNA interference) by injection of double-stranded RNA eliminated the inhibitory effect of LK on sugar feeding [138]. This effect of a stable LK analog suggests a promising lead for a feeding deterrent in the control of mosquitos as disease vectors [138]. Moreover, in the Asian honeybee A. cerana, sucrose sensing is modulated by LK signaling [76]. Knockdown of the LKR by RNAi decreased the sensitivity to sucrose in a proboscis extension response assay. Furthermore, the Lkr gene influences the division of labor in foraging in these bees, and nectar foragers display lower Lkr expression than those foraging for pollen [76].
Feeding and Fecundity in the Cattle Fever Tick
In the cattle fever tick Rhipicephalus microplus, silencing of the LKR by double-stranded RNA injection decreased egg production and the hatching of laid eggs, and also delayed oviposition [144]. This effect appears to be indirect, since the authors did not report expression of the LKR in the ovaries but did report expression in the outer muscle layer of the midgut [144]. It was suggested that LK action on the gut affects gut motility and potentially the uptake and processing of nutrients, which in turn affects nutrient availability and fecundity [144]. An inhibitory effect of LKs on the release of the digestive enzymes protease and amylase from the midgut was in fact shown in the moth Opisina arenosella [142], and myostimulatory effects of LKs are known in several insects [8,9,111]. It is possible that the LK action in the tick also involves the CNS, which could affect the control of feeding and/or hormone release in ways that reduce reproductive output.
Feeding in Rhodnius prolixus and A. aegypti
In R. prolixus and in females of the mosquito Aedes aegypti, protease-resistant LK analogs reduce food intake when injected in the former and applied to the mouthparts and proleg tarsi of the latter [110,138]. Thus, LKs can have anti-feedant activity.
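The knockdown and expression comparisons cited in this section report relative transcript levels, which are conventionally derived from qRT-PCR Ct values; the cited papers do not spell out their quantification in this excerpt, so the following is only a sketch of the standard 2^-ΔΔCt calculation with invented Ct values and an assumed reference gene:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs a control group (2^-ddCt method)."""
    d_ct = ct_target - ct_ref                # normalize target to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl # same normalization for control group
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Hypothetical Ct values: Lkr in nectar foragers vs pollen foragers,
# with a housekeeping gene (e.g., actin) as the assumed reference
fold = relative_expression(ct_target=26.1, ct_ref=18.0,
                           ct_target_ctrl=24.6, ct_ref_ctrl=18.1)
print(f"Lkr in nectar foragers: {fold:.2f}x the level in pollen foragers")
```

With these made-up numbers the fold change comes out around 0.33, i.e., lower expression in nectar foragers, which is the direction reported for A. cerana above.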
Functional Roles of LK in Drosophila
In recent years, Drosophila studies have employed genetic interventions and have revealed actions of specific LK neurons in the brain, SEZ, and abdominal neuromeres (Table 2, Figure 7). The two LHLK neurons (Figure 5D) were shown to modulate metabolism-sleep interactions and serve as clock output [100-103], to modulate state-dependent water- and sugar-enforced memory [130], and probably food choice [131]. This pair of LHLK neurons also regulates insulin-producing cells, which may contribute to sleep-metabolism effects [51,103]. The abdominal ABLKs (see Figure 6C) regulate water and ionic homeostasis along with associated stress [51,87] and the mechanosensory-induced defensive post-mating response in females [132]. Moreover, in Drosophila, LK modulates gustatory neurons, but it is not clear which neurons are responsible [139,141], although the SELKs are in a favorable position. The ABLKs co-express DH44, and specific knockdown of this peptide in ABLKs affects water and ionic homeostasis, as well as feeding [87]. The ALK neurons (Figure 5D) are likely to signal with LK, ITP, sNPF, and TKs [51,82]. The function of LK in these cells is not yet known, but sNPF and TKs regulate metabolic and ionic stress responses [82], and ITP modulates water and ionic homeostasis, as well as feeding and drinking [98]. As seen in Figure 7, some of the functions of LKs appear conserved between Drosophila and other insects: clock-sleep functions, modulation of gustatory neurons, regulation of water and ion homeostasis, and possibly feeding.
Figure 7. Summary of LK functions in Drosophila compared to other insects. In insects other than Drosophila, few functions have been explicitly determined (blue boxes), and most are suggested from LK expression (grey or black boxes). Red arrows indicate hormonal signaling, black arrows indicate established functions, and dashed arrows indicate suggested functions. In the mosquito A. aegypti, LK regulates sugar taste in gustatory receptor neurons (GRs) [138]; in the cockroach L. maderae and some other insects, intestinal contractions are regulated by LK; and in many insects, LK acts as a diuretic factor [75,133]. LK-expressing sensory cells in the intestine of L. maderae send axons to the frontal ganglion and brain, suggesting proprioceptive inputs [27]. In L. maderae, LK is present in pacemaker neurons of the clock circuit [27,93], and in the locust L.
migratoria, LK is expressed in the four widely arborizing SIFamide-producing neurons and in circuits of the central body [34,83]. In several insects, including L. maderae, there are LK-expressing lateral and median neurosecretory cells, indicating hormonal LK signaling from the brain [27,30,31]. In Drosophila, genetic interventions have revealed actions of specific neurons in the brain, subesophageal zone (SEZ), and abdominal neuromeres in several functional roles (blue boxes). These are metabolism-sleep interactions [100-103], food choice [131], water- and sugar-enforced memory [130], food intake, modulation of GRs [139,141], and water and ionic homeostasis [51,87,103]. One set of LK neurons (LHLKs) also regulates insulin-producing cells [51,103]. LK neurons expressing additional peptides contribute to other functions with non-LK peptides (red boxes). These are the ALK neurons, which signal with LK, ITP, sNPF, and TKs and regulate metabolic and ionic stress responses (as well as feeding and drinking) [82,98], and the ABLKs, which also express DH44; this peptide affects feeding and water balance [87].
Targeting the LK Signaling System with Peptide Analogs to Aim at Pest Control
Neuropeptides regulate many vital processes in the daily life of insects, such as development, growth, feeding, reproduction, metabolism, and water and ion homeostasis. These roles, taken together with their high specificity and activity at very low doses, render neuropeptides and their cognate receptors potential leads for the development of eco-friendly insecticidal agents [148-155]. Of the different peptides known, LKs have received considerable attention, since the LK/LKR signaling system seems to have no vertebrate orthologs and plays a key role in the regulation of many vital physiological and behavioral processes in insects, as shown in Section 4.4. In insects, LKs are multifunctional neuropeptides that share a common C-terminal pentapeptide sequence FX1X2WGamide, where X1 can be H, N, S, A, or Y and X2 can be S, P, A, or R (see Figure 1B); this pentapeptide is also the active core of LKs, facilitating peptide design [40,152,156]. As noted in a previous section, LKs have been identified in a wide range of insects (see the DINeR database: http://www.neurostresspep.eu/diner/infosearch), with the exception of most beetles (Coleoptera), all ants, and some wasps (Hymenoptera) [60,62-65,67]. Since LKs are rapidly degraded by peptidases, analogs of insect LKs have been synthesized with modified chemical structures to increase stability [152,156]. Replacement of the X2 residue of the C-terminal pentapeptide core sequence (FX1X2WGamide) with alpha-aminoisobutyric acid (Aib) resulted in resistance to hydrolysis by angiotensin-converting enzyme (ACE) and neprilysin (NEP) [157,158]. A rationale for this is that the X2 position is the primary site of susceptibility to peptidase cleavage. Incorporation of a second Aib residue adjacent to the secondary peptidase hydrolysis site (N-terminal to the F residue) further enhances biostability [157]. These short LK analogs have activities that are similar to or exceed those of native insect kinins when tested on recombinant LKRs from the southern cattle tick Rhipicephalus microplus and the dengue vector Aedes aegypti [59,159-161].
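Because the active core and the peptidase-susceptible X2 position are both captured by the FX1X2WGamide consensus given above, candidate precursors can be screened for LK cores with a few lines of code. The sketch below is illustrative only: the demo sequence is hypothetical (its flanking residues are invented), and a real screen would run over predicted proteomes and also check for the amidation/cleavage signals that flank the core:

```python
import re

# LK core consensus from the text: F-X1-X2-W-G(amide),
# with X1 in {H, N, S, A, Y} and X2 in {S, P, A, R}
LK_CORE = re.compile(r"F[HNSAY][SPAR]WG")

def find_lk_cores(precursor: str):
    """Yield (position, core, x2) for each LK core motif in a precursor."""
    for m in LK_CORE.finditer(precursor.upper()):
        core = m.group(0)
        # core[2] is X2, the primary peptidase cleavage site and the
        # residue replaced by Aib in biostable analogs
        yield m.start(), core, core[2]

# Hypothetical precursor fragment for illustration only
demo = "MKSLVADPAFNSWGGKRAQSGDAFHSWGGKR"
for pos, core, x2 in find_lk_cores(demo):
    print(f"LK core {core} at position {pos}; Aib would replace X2 = {x2}")
```

Scanning the demo string finds two cores (FNSWG and FHSWG), each followed by the G that would donate the C-terminal amide after processing.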
Both in tissue bioassays and in recombinant LKR experiments in vitro, it was shown that the F residue (in position one), the W (in position four), and the amidated C-terminus of the LK pentapeptide core are crucial for LK activity [159,160,162]. Some modified biostable insect LK analogs have potential for use in integrated pest management because they reduce body weight gain in corn earworm (Helicoverpa zea) larvae [157,163] and increase aphid mortality [164-166]. A biostable LK mimetic (analog 1728; K-Aib-1) was shown to inhibit sugar taste receptors and act as a feeding deterrent in Aedes aegypti mosquitoes [138]. Moreover, in the bug R. prolixus, a stable LK analog displayed antifeeding activity after injection [110] and increased hindgut contractions [109]. In female ticks, knocking down the expression of the LKR leads to a significant reduction in their reproductive fitness [144]. Hence, the tick LKR might be a promising target for developing more potent analogs. A recent study screened 14 predicted R. microplus LKs (Rhimi-K) and 11 LK analogs containing Aib and found that all of them were full agonists with potent effects on the LKR of R. microplus [59]. These tick LKs and LK mimetics provide putative tools for tick physiology and management. However, the practical exploitation of insect and tick LKs and LKRs for pest control is still in its early stages. More work is needed to resolve the biostability, production cost, and biosafety of neuropeptide analogs, as well as to find efficient modes of peptide administration to target pest insects.
Conclusions
In this review, we have shown that the expression of LKs is variable among invertebrates. Not only is it absent in many taxa, including some insect groups, but its cellular expression also varies between insect species. Thus, there are 20 LK neurons of 3 major types in the CNS of the Drosophila larva (plus the enigmatic ALKs) and about 250 of multiple types in that of adult L. maderae [27,33]. A conserved feature is, however, the segmentally arranged neurosecretory ABLKs found in all insects studied (see [27,30,31,95]). This suggests that a hormonal role of LKs is a conserved feature among insects, and that a common action is to induce secretion in the Malpighian tubules [50,75], with potential actions on contractility and epithelial transport in the gut [9,111]. Most other functional roles of LK have been studied only in Drosophila, and thus it is not clear at this point to what extent further functions are conserved. However, as seen in Table 2, regulation of taste receptors and feeding, signaling in clock and sleep circuits, as well as gut function may be outputs of LKs in several invertebrate species. Interestingly, even among insects, genes encoding LK and LKR are lacking in many species. Is the lack of LK signaling compensated somehow? A clue can be obtained by looking at diuretic functions in beetles (Coleoptera), where most species have no LK signaling components. In the beetle Tenebrio molitor, genes encoding other diuretic hormones and their receptors (and associated downstream molecules) are upregulated, suggesting that peptide hormones are interchangeable to some extent [75]. This is also emphasized by the fact that LKs are strong diuretic factors in some insect species, such as Drosophila and mosquitos, but have no direct action on diuresis in, e.g., Rhodnius [17,31,111].
Regulation of water and ion homeostasis is complex, with several peptide hormones involved [1,50,114,133]. In locusts and Drosophila, colocalized LK and DH44 activate different signaling systems downstream of their receptors but act synergistically to induce secretion in the tubules [87,114]. The interactions between LKs and other diuretic and antidiuretic hormones are not yet known, but it is likely that the hormonal regulation of water and ion balance differs between taxa, both in terms of the hormones involved and the cellular mechanisms. Moreover, in the CNS, functions of LKs may be carried out by other neuropeptides when LKs have been lost (or never evolved), but what could be the significance of the larger number and diversity of LK neurons in the cockroach brain compared to that of Drosophila? Many neuropeptides act as local neuromodulators, often as cotransmitters of small-molecule neurotransmitters [78-80,167]. Thus, it is likely that LKs produced in smaller interneurons of L. maderae serve local neuromodulatory/cotransmitter roles, similar to, for instance, TKs and sNPF in Drosophila [16,78,168,169]. In Drosophila, on the other hand, the four LK interneurons in the brain/SEZ have relatively wide arborizations and seem to play roles in the orchestration of physiology and behavior. Clearly, we need more experimental data from other insects to be able to understand the core functions of LK signaling and to further appreciate how some functions may have diversified during evolution. Finally, as described in the previous section, LKRs have been chosen as candidate targets for the development of stable peptide mimetics for use in insect and tick pest control. Perhaps the development of small-molecule ligands of LKRs would also be useful in this quest to interfere with the vital LK signaling.
Supplementary Materials: The following are available online at https://www.mdpi.com/1422-0067/22/4/1531/s1: Figure S1. The LK precursor and LK peptides in the cockroach P. americana. Text File S1. Sequences of LKRs in different species used for the cladogram in
2021-02-11T06:16:34.734Z
2021-01-22T00:00:00.000
{ "year": 2021, "sha1": "0780d77a2f8ff0c4b19c1c3fb9a47cd7bf131448", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7913504", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "2449cd77a395674ce7122aa4e9350353341e6bc5", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
219724250
pes2o/s2orc
v3-fos-license
Rg1 protects H9C2 cells from high glucose-/palmitate-induced injury via activation of AKT/GSK-3β/Nrf2 pathway
Abstract
Our previous studies have assessed ginsenoside Rg1 (Rg1)-mediated protection in a type 1 diabetes rat model. To uncover the mechanism through which Rg1 protects against cardiac injury induced by diabetes, we mimicked diabetic conditions by culturing H9C2 cells in high glucose/palmitate. Rg1 had no toxic effect, and it alleviated the high glucose/palmitate damage in a dose-dependent manner, as indicated by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide assay and by lactate dehydrogenase release into the culture medium. Rg1 prevented high glucose/palmitate-induced cell apoptosis, assessed using cleaved caspase-3 and terminal deoxynucleotidyl transferase dUTP nick end labelling staining. Rg1 also reduced high glucose-/palmitate-induced reactive oxygen species formation and increased intracellular antioxidant enzyme activity. We found that Rg1 activates the protein kinase B (AKT)/glycogen synthase kinase-3β (GSK-3β) pathway and the antioxidant nuclear factor erythroid 2-related factor 2 (Nrf2) pathway, as indicated by increased phosphorylation of AKT and GSK-3β and by nuclear translocation of Nrf2. We used the phosphatidylinositol-3-kinase inhibitor Ly294002 to block the activation of the AKT/GSK-3β pathway and found that it partially reversed the protection by Rg1 and decreased Nrf2 pathway activation. The results suggest that Rg1 exerts a protective effect against high glucose and palmitate damage that is partially AKT/GSK-3β/Nrf2-mediated. Further studies are required to validate these findings using primary cardiomyocytes and animal models of diabetes.
Factors contributing to cardiomyocyte apoptosis in DCM include hyperglycemia, hyperlipidemia, hypertension, oxidative stress and activation of the renin-angiotensin system, 3 but the specific mechanism is not clear. Ginsenoside Rg1 (Rg1), one of the critical active components of ginseng extract, has a wide range of physiological activities and significant medicinal value. It has been found that Rg1 has a protective effect on various tissues and organs of the human body and is anti-apoptotic, anti-inflammatory and anti-ageing. 4-6 In our previous study, we found that Rg1 reduces the level of oxidative stress and the apoptosis of cardiomyocytes in the myocardium of diabetic rats. 7 However, the specific mechanism of the prevention of DCM by Rg1 and the signal pathways involved are not clear. The phosphatidylinositol-3-kinase (PI3K)/protein kinase B (AKT) signalling pathway is involved in cell proliferation, differentiation, apoptosis and glucose transport, which are closely related to the occurrence and development of DCM. 8 Furthermore, several recent studies have shown that activation of the PI3K/AKT pathway may result in the up-regulation of nuclear factor erythroid 2-related factor 2 (Nrf2), which drives an important antioxidant pathway. 9,10 However, it is not clear whether the protective effect of Rg1 is mediated by the PI3K/AKT/Nrf2 signalling pathway. Therefore, we aimed to study the role of the PI3K/AKT signalling pathway in the prevention of high glucose and palmitate (G&P) damage by Rg1, and its relationship with the activation of Nrf2. In our study, we aimed to demonstrate that Rg1 could alleviate G&P damage in a dose-dependent manner and could protect against the apoptosis and reactive oxygen species (ROS) production induced by G&P.
We identified that Rg1 activates the AKT/GSK-3β and Nrf2 pathways, which in turn protects H9C2 cells from the apoptosis induced by G&P. Inhibition of the PI3K/AKT pathway by Ly294002 partially abolished the protection of Rg1 against G&P injury and down-regulated Nrf2 expression. Thus, Rg1 provides a protective effect against G&P damage in H9C2 cells that is partially AKT/GSK-3β/Nrf2-mediated.
| Materials
Rg1 (purity > 98%, as determined by high-performance liquid chromatography) was obtained from the Jilin University School of Pharmaceutical Sciences. Ly294002 was obtained from Cell Signaling Technology. The Cell Proliferation Kit (MTT) was obtained from Sigma.
| Cell culture and treatment
H9C2 cells were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% foetal bovine serum (FBS), 2 mmol/L L-glutamine and 100 U/mL penicillin at 37°C in a humidified chamber containing 5% CO2. When cell populations reached 40%-50% confluence, the cultures were exposed to D-glucose at a final concentration of 22.5 mmol/L (high glucose) and palmitate at a final concentration of 50 μmol/L for 24 hours. The doses of glucose and palmitate were based on a previous publication. 11 In addition, some cultured cells were exposed to 5.5 mmol/L D-glucose as a control. After treatment, the monolayer cultures were collected with a gum rubber scraping device and lysed using lysis buffer. Rg1 pretreatment was performed by exposing cells to different doses of Rg1 (0, 5, 10, 20 and 40 μmol/L) for 2 hours and then incubating them with G&P for another 24 hours. In one inhibition group, H9C2 cells were pre-treated with 10 µmol/L Ly294002, a specific PI3K inhibitor (Cell Signaling Technology), at 37°C for 2 hours prior to the addition of Rg1, whereas the other inhibition group was treated with Ly294002 only.
| Cell viability
H9C2 cells were seeded at a density of 5 × 10^3 cells/well in 96-well plates, and cell viability was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay. The cells were incubated with G&P for 24 hours with or without pretreatment with various doses of Rg1 (0, 5, 10, 20 and 40 μmol/L) for 2 hours. Each well was washed twice with phosphate-buffered saline (PBS) to remove the medium before 10% MTT was added to each well and incubated for an additional 4 hours at 37°C. The absorbance was measured using a microplate reader at 490 nm and used as a measurement of cell viability. The absorbance was normalized to that of cells incubated in the control medium, which were considered 100% viable.
| Lactate dehydrogenase release in culture medium
We used the Pierce Lactate Dehydrogenase (LDH) Cytotoxicity Assay Kit (Thermo Fisher Scientific) to determine the LDH release into the culture medium as previously described. 12 Briefly, 50 µL of each sample medium was transferred to a 96-well flat-bottom plate in duplicate wells, mixed with 50 µL of the reaction mixture, and then incubated at room temperature for 30 minutes in the dark, followed by the addition of 50 µL of Stop Solution to each sample well. Absorbance at 490 and 680 nm was measured using a SpectraMax M Series Multi-Mode Microplate Reader (Molecular Devices) to quantify signal (490 nm) and noise (680 nm) absorbance.
| Detection of intracellular reactive oxygen species
Intracellular ROS levels were assessed using 2,7-dichlorofluorescein diacetate (DCFH-DA) according to the manufacturer's instructions (Nanjing Jiancheng Bioengineering Institute); DCFH-DA is converted to the fluorescent compound dichlorofluorescein in the presence of ROS.
H9C2 cells were preloaded with 10 μmol/L DCFH-DA for 30 minutes at 37°C, and then the plates were washed with PBS three times. Fluorescence was determined using a microplate reader with excitation/emission wavelengths of 485/525 nm.
| Histology
H9C2 cells were washed with cold PBS and then fixed with 4% paraformaldehyde for 20 minutes. Subsequently, cells were permeabilized with 0.2% Triton X-100, incubated with 5% bovine serum albumin to block non-specific binding, and then incubated with anti-Nrf2 (Abcam) overnight at 4°C in a humidified chamber. The cells were then incubated with Alexa Fluor 488-labelled goat anti-rabbit IgG antibody for 1 hour at 37°C, followed by incubation with DAPI (Thermo Fisher Scientific). Nuclear Nrf2 translocation was detected by fluorescence microscopy.
| Statistical analyses
Data are presented as mean ± standard error of the mean (SEM). Statistical differences were determined using two-sided, unpaired Student's t tests or two-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. P < .05 was considered statistically significant.
| Protective effects of Rg1 against G&P-induced H9C2 cell injury
We first assessed the toxicity of Rg1 on H9C2 cells, and we found no significant impact on cell viability after exposure to various doses of Rg1 (0, 5, 10, 20 and 40 µmol/L) for 24 hours (Figure 1A). We noted dose-dependent protection of Rg1 against G&P injury, indicated by cell viability and by LDH release into the culture medium (Figure 1B,C).
| Protective effects of Rg1 on G&P-induced H9C2 cell apoptosis
Many studies have demonstrated that apoptosis is the main cause of DCM. We evaluated the degree of involvement of apoptosis in G&P damage in H9C2 cells in vitro. We identified an increase in cleaved caspase-3 and in the Bcl-2-associated X protein (BAX)/B-cell lymphoma 2 (Bcl-2) ratio after H9C2 cell treatment with G&P for 24 hours, as well as protective effects of Rg1, indicated by suppressed cleaved caspase-3 expression and BAX/Bcl-2 ratio (Figure 2A). In addition, G&P increased the TUNEL-positive cell ratio, and Rg1 prevented G&P-induced apoptosis (Figure 2B,C).
| Rg1 reduced G&P-induced ROS formation and increased intracellular antioxidant enzyme activity
Oxidative stress plays a vital role in DCM, and excessive ROS is also a main cause of cell apoptosis. 1 This suggests that Rg1 is associated with reduced oxidative stress in H9C2 cells.
| Effects of Rg1 on AKT/GSK-3β and Nrf2 pathway in H9C2 cells
The PI3K/AKT pathway plays an essential role in many cellular processes, including proliferation, apoptosis and cell migration. 8 Previous studies have shown that this pathway is involved in high glucose-induced apoptosis. 13 The Nrf2 pathway has been shown to be up-regulated during the antioxidative response to cellular stress, including high glucose exposure. 14 Therefore, we evaluated whether these pathways are also involved in the protection by Rg1 of H9C2 cells exposed to G&P. We identified that G&P significantly decreased p-AKT, p-GSK-3β, Nrf2, HO-1 and NQO-1 expression, and that Rg1 (at 40 µmol/L) significantly induced p-AKT, p-GSK-3β, Nrf2, HO-1 and NQO-1 expression (Figure 4A,B). To confirm whether the Nrf2 pathway was activated by Rg1, we stained H9C2 cells of the different groups for Nrf2 and assessed nuclear localization by immunofluorescence. We found that Rg1 treatment increased Nrf2 nuclear translocation, whereas G&P had no influence on it (Figure S1).
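The readouts and tests described in the Methods above (MTT viability normalized to control, LDH signal corrected at the 680 nm reference, unpaired t tests, and two-way ANOVA with Tukey's test) reduce to a short analysis script. A minimal sketch with simulated per-well values; the 2 × 2 design and all numbers are hypothetical, and any blank correction the kits require is not shown:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def mtt_viability_percent(a490, control_a490):
    """MTT viability as % of control cells (control medium = 100%)."""
    return 100.0 * np.mean(a490) / np.mean(control_a490)

def ldh_corrected(a490, a680):
    """Background-corrected LDH absorbance: signal (490 nm) minus noise (680 nm)."""
    return a490 - a680

rng = np.random.default_rng(0)
control_a490 = rng.normal(0.82, 0.02, 5)   # hypothetical control wells
gp_a490 = rng.normal(0.43, 0.02, 5)        # G&P wells
gp_rg1_a490 = rng.normal(0.64, 0.02, 5)    # G&P + Rg1 wells

print(f"G&P viability:       {mtt_viability_percent(gp_a490, control_a490):.1f}%")
print(f"G&P + Rg1 viability: {mtt_viability_percent(gp_rg1_a490, control_a490):.1f}%")
print(f"LDH (G&P medium):    {ldh_corrected(1.12, 0.07):.2f} AU")

# Two-sided, unpaired Student's t test on a single pairwise comparison
print(stats.ttest_ind(gp_a490, gp_rg1_a490))

# Two-way ANOVA (G&P exposure x Rg1 pretreatment) followed by Tukey's test
df = pd.DataFrame({
    "gp":  ["no"] * 10 + ["yes"] * 10,
    "rg1": (["no"] * 5 + ["yes"] * 5) * 2,
    "viability": np.concatenate([
        rng.normal(100, 3, 5), rng.normal(99, 3, 5),   # no G&P, -/+ Rg1
        rng.normal(55, 3, 5),  rng.normal(75, 3, 5),   # G&P,    -/+ Rg1
    ]),
})
model = ols("viability ~ C(gp) * C(rg1)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
print(pairwise_tukeyhsd(df["viability"], df["gp"] + "/" + df["rg1"]))
```

In this layout, a significant gp:rg1 interaction term is what would indicate that Rg1 pretreatment changes the effect of G&P, with Tukey's test identifying which group pairs differ.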
| Effects of a PI3K inhibitor on H9C2 cells exposed to Rg1 and/or G&P
We demonstrated that the PI3K/AKT and Nrf2 pathways were activated by Rg1. We used the PI3K inhibitor Ly294002 to treat H9C2 cells in the presence or absence of Rg1 and/or G&P to evaluate whether the PI3K/AKT pathway had a significant role in the Rg1 defence against G&P damage. We found that Ly294002 partially reversed the protection of Rg1 against G&P injury, as indicated by cell viability and LDH release (Figure 5A,B). Western blotting results revealed that H9C2 cells exposed to G&P had a significant increase in cleaved caspase-3 expression compared with the control group, while those exposed to Rg1 had reduced cleaved caspase-3 expression compared with the G&P group. This effect was partially reversed by Ly294002 (Figure 5C).
| AKT/GSK-3β/Nrf2 pathway plays an important role in Rg1 action against G&P-induced H9C2 cell injury
The transcription factor Nrf2 is an essential downstream target of the PI3K/AKT pathway. 15,16 As our previous results showed that the AKT/GSK-3β and Nrf2 pathways might be involved in the effect of Rg1 on G&P damage, we assessed the relationship between the AKT/GSK-3β and Nrf2 pathways. We identified a significant decrease in p-AKT and p-GSK-3β expression after Ly294002 and Rg1 treatment compared with the Rg1 group (Figure 6A). The same pattern was found in the expression of Nrf2, HO-1 and NQO-1 (Figure 6B).
FIGURE 6 AKT/GSK-3β/Nrf2 pathway plays an important role in Rg1 protection against G&P-induced H9C2 injury. H9C2 cells were pretreated with Ly294002 (10 μmol/L) for 2 h, then co-treated with Rg1 (40 μmol/L) for another 24 h. A, Expression of p-AKT, t-AKT, p-GSK-3β and t-GSK-3β in the different groups (control; Rg1; Ly294002; Ly294002 + Rg1) tested by Western blotting; n = 5 per group. B, Nrf2, HO-1 and NQO-1 expression in each group (control; Rg1; Ly294002; Ly294002 + Rg1) tested by Western blotting; n = 5 per group. *P < .05 vs control group, #P < .05 vs Rg1 group.
FIGURE 7 Schematic illustration of the mechanism by which Rg1 protects against G&P-induced H9C2 injury. G&P could inhibit the AKT/GSK-3β pathway and induce H9C2 cell death, whereas Rg1 could activate the AKT/GSK-3β pathway by phosphorylation, which in turn dissociates Nrf2 from KEAP1 (Kelch-like ECH-associated protein 1); Nrf2 then translocates into the nucleus and recognizes the appropriate antioxidant response element (ARE) sequence. As a result, it initiates the transcription of a series of antioxidative genes harboring AREs in their promoter regions, including HO-1, NQO1 and CAT. These antioxidant products protect cells against oxidative stress-induced apoptosis.
| DISCUSSION
Our previous study revealed that apoptosis plays an important role in the development of DCM in rats. 7 In this study, we identified that Rg1 is non-toxic to H9C2 cells at doses of no more than 40 µmol/L (Figure 1A), and that the protection conferred by Rg1 on H9C2 cells that are exposed to G&P is dose dependent (Figure 1B,C). We also identified that apoptosis plays an important role in G&P damage and that Rg1 alleviates G&P-induced apoptosis (Figure 2A,B). In addition, Rg1 reduced G&P-induced ROS formation and increased intracellular antioxidant enzyme activity (SOD, CAT and GSH-Px activities) (Figure 3). We also found that Rg1 could activate the AKT/GSK-3β/Nrf2 pathway (Figure 4, Figure S1) and partially abolish G&P injury (Figure 5). The PI3K inhibitor Ly294002 also down-regulates Nrf2 activation (Figure 6) and partially reverses the protective effects of Rg1.
Thus, Rg1 provides a protective effect against G&P damage in H9C2 cells that is partially AKT/GSK-3β/Nrf2-mediated. All the above hypotheses are summarized in Figure 7 but remain to be examined in our future work. Rg1 is one of the most important active components in ginseng extract, which has substantial medicinal value and physiological activity. The protective effects of Rg1 on the cardiovascular system have been confirmed by many studies. 7,17,18 Several studies have shown that Rg1 has protective effects on diabetes-induced cardiomyopathy. 7,19 The protective effect of Rg1 on the heart has been shown to involve anti-apoptotic activity and reduced oxidative stress. 17,20 Our previous study reported that Rg1 ameliorates diabetic cardiomyopathy by inhibiting endoplasmic reticulum stress-induced apoptosis in a streptozotocin-induced diabetic rat model. 7 To uncover the underlying mechanism through which Rg1 protects against cardiac injury induced by diabetes, we mimicked diabetic conditions by culturing H9C2 cells in high glucose/palmitate and found that Rg1 showed a significant protective effect against G&P damage by suppressing cell apoptosis (Figure 2) and ROS production (Figure 3A) and by increasing intracellular antioxidant enzyme activity (Figure 3B-D). This result agrees with what was observed in type 1 diabetes animal models. 20 PI3K/AKT is an important insulin signal transduction pathway, and the PI3K/AKT signalling pathway is closely related to apoptosis. 21,22 After PI3K is activated, it generates second messengers at the cell membrane that combine with AKT and promote AKT activation. The latter can regulate multiple transcription factors through phosphorylation, especially GSK-3β, and is anti-apoptotic. 23 We found that Rg1 significantly increased the phosphorylation of AKT and GSK-3β (Figure 4A). Moreover, we used the PI3K inhibitor Ly294002 to confirm the role of the PI3K/AKT pathway in Rg1 protection against G&P injury. As expected, Ly294002 partially blocked Rg1 protection against G&P-induced injury (Figure 6). These findings suggest that Rg1 promotes cell viability by activating the PI3K/AKT signalling pathway. A number of studies have confirmed that the loss of Nrf2 aggravates diabetes-induced cardiomyopathy. 36
| CONCLUSIONS
In conclusion, we demonstrated that Rg1 inhibits and improves G&P injury in H9C2 cells. Our results indicate that G&P induce significant H9C2 cell death, and this is substantially reduced by
2020-06-18T09:05:05.430Z
2020-06-16T00:00:00.000
{ "year": 2020, "sha1": "148509af22cd21ff96046e9dc120df6f47af08e9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1111/jcmm.15486", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "79e54a223aed62c6fe9b83d018dc6dc554c71501", "s2fieldsofstudy": [ "Biology", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
231574583
pes2o/s2orc
v3-fos-license
Alveolar compartmentalization of inflammatory and immune cell biomarkers in pneumonia-related ARDS

Background Biomarkers of disease severity might help individualizing the management of patients with the acute respiratory distress syndrome (ARDS). Whether the alveolar compartmentalization of biomarkers has a clinical significance in patients with pneumonia-related ARDS is unknown. This study aimed at assessing the interrelation of ARDS/sepsis biomarkers in the alveolar and blood compartments and explored their association with clinical outcomes. Methods Immunocompetent patients with pneumonia-related ARDS admitted between 2014 and 2018 were included in a prospective monocentric study. Bronchoalveolar lavage (BAL) fluid and blood samples were obtained within 48 h of admission. Twenty-two biomarkers were quantified in BAL fluid and serum. HLA-DR+ monocytes and CD8+ PD-1+ lymphocytes were quantified using flow cytometry. The primary clinical endpoint of the study was hospital mortality. Patients undergoing a bronchoscopy as part of routine care were included as controls. Results Seventy ARDS patients were included. Hospital mortality was 21.4%. The BAL fluid-to-serum ratio of IL-8 was 20 times higher in ARDS patients than in controls (p < 0.0001). ARDS patients with shock had lower BAL fluid-to-serum ratios of IL-1Ra (p = 0.026), IL-6 (p = 0.002), IP-10/CXCL10 (p = 0.024) and IL-10 (p = 0.023) than others. The BAL fluid-to-serum ratio of IL-1Ra was more elevated in hospital survivors than in decedents (p = 0.006), even after adjusting for SOFA and driving pressure (p = 0.036). There was no significant association between alveolar or alveolar/blood monocytic HLA-DR or CD8+ lymphocyte PD-1 expression and hospital mortality. Conclusions IL-8 was the most compartmentalized cytokine, and lower BAL fluid-to-serum concentration ratios of IL-1Ra were associated with hospital mortality in patients with pneumonia-associated ARDS.

Background

The acute respiratory distress syndrome (ARDS) is the most severe form of acute hypoxemic respiratory failure and affects 10% of all intensive care unit (ICU) patients. Despite advances in patient management during the previous decades, hospital mortality of ARDS remains as high as 40% [1]. As most pharmacological interventions tested in ARDS yielded disappointing results [2-4], the identification of biomarkers of disease severity that would be potential therapeutic targets or allow for individualizing patient management has become a major area of research. Indeed, combining plasma biomarkers and clinical variables has been shown to improve mortality prediction in ARDS patients [5] and allowed for identifying subphenotypes with different clinical outcomes and therapeutic intervention responses [6,7]. While blood has been the most common biological sample used to search for candidate biomarkers, bronchoalveolar lavage (BAL) fluid is the closest sample to the site of injury and more accurately reflects the local lung environment [8], as illustrated by a pioneering study that identified BAL fluid, but not plasma, levels of IL-8 as predicting ARDS development in at-risk patients [9].
In fact, no single biomarker obtained from blood samples has been shown to be consistently associated with outcomes in a recent systematic review [10]. This lack of association may be due to an alveolar compartmentalization of biomarkers during pneumonia-related ARDS. Pulmonary infections account for the vast majority of ARDS risk factors [11] and are associated with septic shock in about 70% of cases. In patients with septic shock, a sustained decrease in HLA-DR expression on circulating monocytes [12,13] was consistently associated with an increased risk of nosocomial infections [14] and a higher risk of death [14-16]. Programmed death receptor-1 (PD-1) is an inhibitory immune checkpoint receptor expressed on activated lymphocytes and myeloid cells, which participates in the maintenance of immune tolerance [17]. Preclinical experiments using ARDS models [18] showed a survival benefit of PD-1 pathway inhibition, suggesting that PD-1 expression on immune cells could be an outcome biomarker in patients with sepsis [19-21] and ARDS [18]. Sepsis-induced defects in innate and adaptive immune cells were observed not only in blood but also in the lungs of patients dying from sepsis, illustrating that such immune alterations also occur in situ, although the clinical significance of these regional alterations has not been established [22]. Monitoring blood monocyte HLA-DR expression has previously been used to guide targeted immunological interventions [23-25], and it has been speculated that the quantification of HLA-DR on alveolar monocytes [26] may enrich the identification of patients who might benefit from immunomodulatory interventions [16,27,28]. Better understanding the interplay of ARDS biomarkers between the alveolar and blood compartments thus seems a critical step to provide new insights into pathogenesis. In the current study, we aimed to assess, in a prospective cohort of patients with moderate to severe pneumonia-associated ARDS: (1) the interrelation of ARDS/sepsis biomarkers in the alveolar and blood compartments, and (2) their association with clinical outcomes.

Methods

Study design

This prospective single-center observational cohort study was approved by the institutional ethics committee (Comité de Protection des Personnes Ile-de-France V, Paris, France, #13899). Consecutive patients diagnosed with pneumonia-related ARDS admitted to the medical ICU of Henri Mondor Hospital, Créteil, France, from January 2014 to December 2018 were eligible for inclusion in the study. Informed consent was obtained from all included patients or their relatives.

Patients and data collection

All patients with moderate/severe pneumonia-related ARDS [11] were included consecutively, with the following inclusion criteria: tracheal intubation and mechanical ventilation for less than 48 h; pulmonary infection diagnosed less than 7 days before ICU admission; bilateral pulmonary infiltrates on chest X-ray; and a PaO2/FiO2 ratio ≤ 200 mmHg with a positive end-expiratory pressure (PEEP) ≥ 5 cm H2O. Non-inclusion criteria were as follows: age < 18 years; pregnancy; chronic respiratory failure requiring long-term oxygen therapy; Child-Pugh C liver cirrhosis; lung fibrosis; immunosuppression; SAPS II (Simplified Acute Physiology Score II) > 90; irreversible neurological disorders; withholding/withdrawing of life-sustaining therapies; and profound hypoxemia (PaO2/FiO2 < 75 mmHg).
Control patients (i.e., non-mechanically ventilated patients free of ARDS or immunosuppression; n = 7) undergoing a bronchoscopy with bronchoalveolar lavage (BAL) and blood sampling as part of routine care were also included (Additional file 1: Table S1). None of the controls was receiving antibiotics at the time of BAL fluid and blood sampling. Demographics, clinical and laboratory variables upon ICU admission, at sample collection time points and during the ICU stay were prospectively collected. The initial severity of ARDS patients was assessed using the SAPS II [29] and the sequential organ failure assessment (SOFA) scores. Other variables included the use of adjuvant therapies for ARDS (i.e., neuromuscular blocking agents, nitric oxide inhalation, prone positioning, extracorporeal membrane oxygenation), the need for hemodialysis or vasopressors, the administration of corticosteroids, the number of ventilator- and organ failure-free days at day 28, and ICU mortality. The clinical endpoint of the study was hospital mortality. ARDS patients received mechanical ventilation using a standardized protective ventilation strategy [30]. Other treatments, including neuromuscular blocking agents [31], nitric oxide inhalation [32], prone positioning [33] and extracorporeal membrane oxygenation, were administered depending on the severity of ARDS [34]. The prevention of ventilator-associated pneumonia followed a multifaceted program [35]; sedation and mechanical ventilation weaning followed standardized protocols [36].

BAL fluid and blood sampling

BAL fluid was collected and preserved undiluted from all ARDS patients during a bronchoscopy performed within 48 h of ARDS onset. BAL fluid samples were also collected from control patients. Concomitant blood samples were obtained in ARDS and control patients. During a standard flexible bronchoscopy, the bronchoscope was wedged within a bronchopulmonary segment. Four aliquots of normal saline (50 mL each) were instilled through the bronchoscope within the selected bronchopulmonary segment. After each aliquot was instilled, saline was retrieved using negative suction pressure (BAL fluid return did not differ between ARDS patients and controls: median 59 mL versus 80 mL, p = 0.40). BAL samples were filtered through a 100 μm cell strainer and centrifuged, and BAL cells were then collected in phosphate buffered saline solution. BAL fluid cytology was performed by direct microscopy after centrifuging BAL fluid samples (12,000 rpm for 10 min) and staining with May-Grünwald-Giemsa. Total (quantified in cells/mL) and differential (i.e., percent of neutrophils, macrophages and lymphocytes) cell counts were measured as recommended [37]. Blood and BAL fluid samples were shipped at room temperature to the cytometry platform and analyzed within two hours. BAL fluid and blood samples were centrifuged and supernatants were stored at − 80 °C for subsequent analyses.

Flow cytometry analysis

Blood and BAL fluid immunostaining were performed as follows: 100 μL of whole blood or BAL fluid were incubated for 10 min at room temperature in the dark with the following conjugated monoclonal antibodies: anti-CD3-AA750, anti-CD8-AA700, anti-CD279 (PD-1)-PC7 or isotype control, anti-HLA-DR-PB or isotype control, anti-CD14-ECD and CD45-Krome Orange (Beckman Coulter). For blood samples, red blood cells were then lysed using VersaLyse Solution (Beckman Coulter).
Washed blood and BAL fluid-stained samples were immediately acquired on a 10-color Navios flow cytometer and analyzed with the Kaluza 2.1 software (both from Beckman Coulter). The gating strategy is depicted in Additional file 1: Figure S1 for BAL fluid (Panel A) and blood (Panel B). HLA-DR and PD-1 quantification were expressed as a percentage of positive cells or as mean fluorescence intensity (MFI).

Data presentation and statistical analysis

Continuous variables are reported as median [1st-3rd quartiles] or mean ± standard deviation (SD), and compared using the unpaired Student t test or the Mann-Whitney test, as appropriate. Comparison of paired quantitative variables was performed using the Wilcoxon matched-pairs signed-rank test, or two-way ANOVA with repeated measures when more than two groups were compared. Correlations between continuous variables were assessed using the Spearman method. Qualitative variables are expressed as numbers and percentages and compared with the chi-squared or Fisher exact tests, as recommended. Uni- and multivariable logistic regression models were used to assess the relationship between BAL fluid-to-serum concentration ratios of biomarkers, BAL fluid-to-blood ratios of monocytic HLA-DR or CD8+ T lymphocyte PD-1 expression, as continuous variables, and hospital mortality (dependent variable). Adjusted analyses were performed including major prognostic variables defined a priori (i.e., SOFA score [38] and driving pressure [39]). No imputation of missing variables was performed. A p value < 0.05 was considered significant. Statistical analyses were performed using GraphPad Prism (version 8.0, GraphPad Software, Inc., San Diego, CA, USA) and R 3.1.2 (The R Foundation for Statistical Computing, Vienna, Austria).

Results

Initial presentation and outcomes of patients with pneumonia-related ARDS

One hundred and eighty-eight patients with moderate-to-severe pneumonia-related ARDS were admitted to the ICU during the four-year study period, of whom 118 had non-inclusion criteria and 70 were included in the study (Additional file 1: Figure S2). A microbiological documentation was obtained in 87% (n = 61/70) of included patients, 67% (n = 47/70) of whom had bacterial infections and 26% (n = 18/70) viral infections (four had bacterial and viral coinfections) (Additional file 1: Table S2). The comorbidities and the clinical and biological characteristics of patients at ICU admission and at the time the first BAL was sampled (i.e., after a median delay of one day following intubation) are presented in Additional file 1: Table S1. In-hospital mortality was 21.4% (n = 15/70). Patients who were dead at hospital discharge did not exhibit more frequent ventilator-associated pneumonia episodes or septic shock during the hospital stay, but required renal replacement therapy more frequently than those who survived (Table 1).

Biomarkers in the bronchoalveolar lavage fluid and serum of patients with pneumonia-related ARDS

Biomarkers previously shown to be associated with key pathways involved in the pathophysiology of ARDS [8,10] were quantified in BAL fluid and serum samples obtained on average one day after intubation and compared with those of controls. As expected, dramatically higher concentrations of these biomarkers were observed in ARDS patients (Additional file 1: Figures S3a and S3b). Significant positive correlations were observed between BAL fluid and serum concentrations for most of the studied biomarkers (Fig. 1a).
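As a rough sketch of the analysis plan just described, the unadjusted comparison and the adjusted logistic regression could be run as follows in Python; the file and column names (ratio_il1ra, sofa, driving_pressure, hospital_death) are hypothetical placeholders, not the study's actual variables:

```python
# Sketch of the univariable and SOFA/driving-pressure-adjusted analyses.
# Column names are hypothetical; the real dataset is not public.
import pandas as pd
from scipy.stats import mannwhitneyu
import statsmodels.api as sm

df = pd.read_csv("ards_cohort.csv")  # hypothetical file

# Unadjusted comparison of a BAL fluid-to-serum ratio between
# hospital survivors and decedents (Mann-Whitney U test).
survivors = df.loc[df["hospital_death"] == 0, "ratio_il1ra"]
decedents = df.loc[df["hospital_death"] == 1, "ratio_il1ra"]
stat, p_unadjusted = mannwhitneyu(survivors, decedents)
print(f"Mann-Whitney p = {p_unadjusted:.4f}")

# Multivariable logistic regression adjusted for the two prognostic
# covariates defined a priori (SOFA score and driving pressure).
X = sm.add_constant(df[["ratio_il1ra", "sofa", "driving_pressure"]])
model = sm.Logit(df["hospital_death"], X).fit(disp=0)
print(model.summary2())  # exponentiate coefficients for odds ratios
```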
In an attempt to assess the alveolar concentrations of these biomarkers relative to those of serum, we computed BAL fluid-to-serum concentration ratios (Fig. 2a, b). Strikingly, the BAL fluid-to-serum ratios of most of the measured biomarkers yielded values close to 1 in ARDS patients and controls, indicating no concentration gradient between the alveolar and blood compartments, while values greater than one were observed for SP-D, IL-6, IL-8 and IP-10/CXCL10. Of note, the only cytokine that showed a significantly higher ratio in ARDS patients than in controls was IL-8 (p < 0.0001, Fig. 2a), with measured concentrations 20 times as high in BAL fluid as in serum. We further investigated whether patients who had septic shock at the time the biomarkers were drawn exhibited different BAL fluid-to-serum ratios than others. Interestingly, most of the cytokines involved in innate immunity (i.e., IL-1Ra, IL-6, IP-10/CXCL10 and IL-10), together with Ang2, a biomarker of endothelial injury, showed significantly higher ratios in non-shocked versus shocked patients, indicating less alveolar compartmentalization of these biomarkers in shocked patients (Fig. 2b). An exploratory analysis assessing the prognostic value of the BAL fluid-to-serum ratio of these biomarkers indicated that IL-10, IL-1Ra, amphiregulin and RAGE were significantly associated with hospital mortality (Table 2). Yet, the only biomarker whose BAL fluid-to-serum ratio remained significantly associated with mortality after adjusting for admission SOFA and driving pressure was IL-1Ra (Table 2). A comparison of the areas under the receiver operating characteristic curves for serum versus BAL fluid versus BAL fluid-to-serum ratios of IL-10, IL-1Ra, amphiregulin and RAGE for hospital mortality consistently showed that BAL fluid-to-serum ratios had the strongest association with hospital mortality, except for serum RAGE levels, which showed the same prediction performance as the BAL fluid-to-serum ratio (Additional file 1: Figure S4). Raw BAL fluid and serum biomarker concentrations in survivors and decedents are shown in Additional file 1: Table S3.

Cell surface biomarkers on bronchoalveolar and blood leukocytes of patients with pneumonia-related ARDS

As expected in pneumonia-related ARDS, BAL fluid cellularity was elevated (median: 470 × 10³ cells/mL [227-975]) and differential cell counts showed a majority of neutrophils (69%, Table 1), consistent with alveolar inflammation. We quantified the monocytic expression of HLA-DR, a prognostic cell surface biomarker in septic shock patients [15], on bronchoalveolar and circulating monocytes within 48 h of tracheal intubation. As compared with control patients, those with pneumonia-related ARDS exhibited significantly lower HLA-DR expression, both on circulating (p < 0.0001 when expressed as a percentage of positive cells; Fig. 3a) and alveolar (p < 0.0001 when expressed as MFI; Fig. 3b) monocytes. ARDS patients also displayed dramatically higher HLA-DR expression, expressed both as a percentage of positive cells (p < 0.0001; Fig. 3a) and as MFI (p < 0.0001; Fig. 3b), on their alveolar than on their blood monocytes, consistent with the recruitment of activated monocytes into the infected lungs. Of note, there was a significant positive correlation between HLA-DR expression on circulating and on alveolar monocytes (Fig. 1b).
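A minimal sketch of how the ratios and compartment correlations above can be computed from paired measurements; the data file and the bal_/serum_ column naming are our assumptions:

```python
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("biomarkers.csv")  # hypothetical paired measurements

for marker in ["il6", "il8", "il1ra", "il10", "spd", "rage"]:
    bal, serum = df[f"bal_{marker}"], df[f"serum_{marker}"]
    df[f"ratio_{marker}"] = bal / serum      # BAL fluid-to-serum ratio
    rho, p = spearmanr(bal, serum)           # BAL-serum correlation
    print(f"{marker}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```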
The BAL fluid-to-blood ratio of monocytic HLA-DR expression was computed so as to better assess the compartmentalization of this biomarker during pneumonia-associated ARDS: when expressed as a percentage of HLA-DR positive monocytes, the ratio was higher in ARDS patients than in controls. We also compared this ratio between patients with and without septic shock, as HLA-DR monocytic expression is an outcome biomarker in this specific group of patients, and observed that it was lower in the former than in the latter (Fig. 2b). There was no statistically significant association between HLA-DR expression on alveolar monocytes and hospital mortality, even after adjusting for potentially confounding variables (i.e., SOFA score and driving pressure, Additional file 1: Table S4). There was also no significant relationship between the BAL fluid-to-blood ratio of monocytic HLA-DR expression and hospital mortality. There was a negative correlation between HLA-DR on alveolar monocytes and the SOFA score (Spearman r = − 0.42; p = 0.0003). Patients with pneumonia-related ARDS exhibited significantly higher PD-1 expression on both alveolar (p = 0.001) and blood (p = 0.022) CD8+ T lymphocytes than did control patients. Among ARDS patients, a higher expression of PD-1 was also observed on alveolar than on blood CD8+ T lymphocytes (p < 0.0001; Fig. 3c and p = 0.016; Fig. 3d), consistent with the recruitment of activated CD8+ lymphocytes at the site of infection. There was no statistically significant association between PD-1 on alveolar CD8+ T lymphocytes, or the BAL fluid-to-blood ratio of PD-1+ CD8+ cells (in percentage or MFI), and hospital mortality (Additional file 1: Table S5).

Discussion

The current study included 70 patients with pneumonia-related ARDS and quantified the concomitant concentration/cell surface expression of biomarkers in the bronchoalveolar and blood compartments. This was a homogeneous cohort of immunocompetent patients, all diagnosed with moderate-to-severe ARDS for less than 48 h when included in the study. The main results of the current study are as follows: (1) IL-8 had the highest BAL fluid-to-serum concentration ratio, and IL-1Ra, IL-6, IP-10/CXCL10 and IL-10 showed higher lung/blood concentration gradients in non-shocked than in shocked patients; (2) in an exploratory analysis, IL-1Ra was associated with hospital mortality after adjusting for major confounding variables defined a priori (i.e., SOFA and driving pressure); and (3) HLA-DR expression on monocytes and PD-1 expression on CD8+ T lymphocytes, measured within 48 h of intubation, showed a lung compartmentalization but were not associated with hospital mortality. The identification of reliable biomarkers constitutes a major area of research in ARDS, to help predict its development, stratify disease severity into more accurate phenotypes, provide new insights into its pathogenesis and monitor response to treatment [8]. Although improvements regarding patient phenotyping have been made using multiparametric approaches combining clinical and biological variables [6,7], no single biomarker obtained from blood samples has been shown to be consistently associated with outcomes [10].

[Figure 2 caption: BAL fluid-to-serum concentration ratios (a, b) and BAL fluid-to-blood cell ratios (c, d). ARDS patients (light red) are compared with controls (open circles) (a, c); ARDS patients with shock (dark blue) are compared with ARDS patients without shock (light blue) (b, d). Symbols indicate medians and bars show the 1st and 3rd tertiles. p values come from the Mann-Whitney test. *Concentrations of Serpin, RANTES, IL-7, VEGF and amphiregulin could not be measured in controls; BAL fluid-to-serum concentration ratios of IFN-γ and IL-10 could not be computed because serum concentrations equaled zero in controls.]
This lack of association is possibly due to a compartmentalization of biomarkers during pneumonia-related ARDS. In the current study, we explored the interrelation between alveolar and blood concentrations of biomarkers previously associated with ARDS and observed significant correlations between both compartments for most of the cytokines measured. Yet, alveolar concentrations of pro-inflammatory cytokines, including IL-6, IL-8 and IP-10/CXCL10, and of SP-D, were significantly higher than their serum concentrations, consistent with a lung-borne production of these biomarkers, the most compartmentalized of which was IL-8, a potent neutrophil chemoattractant, confirming its pivotal role in ARDS pathophysiology [5,9]. Moreover, the fact that patients with shock had lower BAL fluid-to-serum concentration ratios of the main pro/anti-inflammatory cytokines (i.e., IL-1Ra, IL-6, IP-10/CXCL10 and IL-10) suggests that a lesser lung compartmentalization of these mediators might be a mechanism leading to the extra-pulmonary organ failures complicating the course of ARDS, as previously hypothesized [40,41]. The fact that lower values of the BAL fluid-to-serum ratio of IL-1Ra were associated with hospital mortality, even after adjusting for SOFA and driving pressure, reinforces this hypothesis. HLA-DR expression on alveolar monocytes of ARDS patients was lower than that of control patients, suggesting a down-regulation of HLA-DR expression in the infected lungs. This finding mirrors the previously reported down-regulation of HLA-DR expression on circulating monocytes of patients with septic shock [13]. During septic shock, monocyte deactivation, defined as a diminished antigen-presenting capacity reflected by the down-expression of HLA-DR, has been repeatedly associated with morbidity and mortality [14,15]. The decrease in HLA-DR expression on circulating monocytes is thus a robust predictor of outcome in septic shock patients, which can be restored by immunostimulation with GM-CSF [24].

[Table 2 caption: BAL fluid-to-serum concentration ratios of cytokines and epithelial/endothelial injury biomarkers in pneumonia-associated ARDS patients who survived to hospital discharge (n = 55) or not (n = 15). Variables are expressed as median [1st-3rd quartiles] of fluorescence intensity; OR (95% CI): odds ratios and their 95% confidence intervals. (a) Unadjusted p values come from the Mann-Whitney test. (b) Adjusted p values yielding statistical significance at the p < 0.05 level come from multivariable logistic regression analyses adjusted for SOFA and driving pressure; bolded results are significant at the p < 0.05 level.]

However, we did not observe a significant association between early HLA-DR expression on alveolar monocytes and hospital mortality. Few studies have focused on the outcome impact of a decreased alveolar monocytic HLA-DR expression. Making the hypothesis that reversing HLA-DR down-regulation on alveolar monocytes would improve outcomes, Herold et al. administered inhaled GM-CSF, as a compassionate intervention, to six patients with pneumonia-related ARDS with documented decreased HLA-DR expression on alveolar monocytes [42].
In this pilot study, inhaled GM-CSF administration was associated with improved oxygenation and restored HLA-DR expression on alveolar monocytes, but the lack of a control arm and the low number of patients treated precluded any firm conclusion from being drawn. Our data show that monitoring HLA-DR expression on alveolar monocytes during the first 48 h of pneumonia-related ARDS did not allow for identifying a subset of patients at higher risk of poor outcomes, thus suggesting that this biomarker should not be used, at least during the early phase of ARDS, to monitor regional immune status or guide therapeutic interventions. Interestingly, HLA-DR expression was higher on alveolar than on circulating monocytes in pneumonia-related ARDS patients. Such compartmentalization of HLA-DR expression has already been observed in septic shock patients [43]. The fact that alveolar monocytic HLA-DR expression was also lower in ARDS than in control patients is consistent with the recruitment of circulating monocytes into the alveolar space [44]. As expected, the SOFA score was negatively correlated with HLA-DR expression on alveolar monocytes, suggesting that the number of organ failures was associated with monocyte deactivation in the lungs, as previously shown in circulating monocytes of septic shock patients [14]. We also quantified PD-1 expression on alveolar and blood CD8+ T lymphocytes. Patients with pneumonia-related ARDS exhibited significantly higher PD-1 expression on both alveolar and peripheral circulating CD8+ T lymphocytes than control patients. This is consistent with the work of Zhang et al. [45], who reported higher PD-1 expression on peripheral T cells of septic shock patients than on those of controls. Several studies reported that patients with septic shock and high levels of PD-1 expression on peripheral T lymphocytes were more likely to have increased mortality and a higher occurrence of nosocomial infections [20], and Morrell et al. reported that PD-L1/PD-1 pathway-associated genes were significantly decreased in alveolar macrophages from ARDS patients who died or had prolonged mechanical ventilation [46]. However, we observed no significant association between the PD-1 expression level on alveolar CD8+ T lymphocytes and outcomes. Additionally, patients with pneumonia-related ARDS had significantly higher PD-1 expression on alveolar than on blood CD8+ T lymphocytes. Such compartmentalization of PD-1 expression was already observed in preclinical experimental as well as autopsy studies and may chiefly reflect the recruitment of activated lymphocytes at the site of infection [22,47]. Our study certainly has a number of limitations. This is a monocentric study including a homogeneous population of patients with pneumonia-related ARDS, thus limiting its external validity and the generalizability of the findings. The relatively small number of patients included precluded validating our results in an independent validation cohort, and the results of the conducted analyses, some of which would lose statistical significance after accounting for multiple testing, should be considered exploratory and interpreted with caution. Regarding the analysis of the relationship between biomarkers and hospital mortality, we chose not to control all statistical tests performed for multiple testing, but instead preferred to adjust for prognostic variables defined a priori (i.e., SOFA score and driving pressure).
Our control population only included spontaneously breathing patients not receiving antibiotics at the time of BAL fluid sampling, which might have contributed to between-group differences. Other limitations of our study are the constraints associated with measuring BAL fluid-to-serum ratios of biomarkers, which limit their analysis in "real-life" conditions. We thus acknowledge that the current study is more likely to have an impact on our understanding of the pathophysiology of the compartmentalization of biomarkers during ARDS than on clinical management. The flow cytometry gating strategy used for distinguishing alveolar monocytes from macrophages did not use antibodies against CD206 and CD169 [48] but identified side scatter intermediate (SSC), CD45+ and CD14+ cells. Although such methods were previously reported [49], we cannot exclude that our alveolar monocyte population was contaminated by macrophages. Last, we chose not to normalize BAL fluid concentrations of the studied biomarkers using BAL fluid-to-serum urea or albumin concentration ratios, as neither of these methods has been shown to improve the accuracy of the measurements performed [50,51]. Our study also has some strengths, including a prospective design allowing for uniform timing of measurements at a clinically relevant time point and the combination of clinical, flow cytometry and cytokine data.

Conclusion

In conclusion, this study showed that, in patients with pneumonia-associated ARDS, IL-8 was the most compartmentalized cytokine and that lower BAL fluid-to-serum concentration ratios of IL-1Ra were associated with hospital mortality, even after adjusting for SOFA and driving pressure. In contrast, neither alveolar monocytic HLA-DR expression nor CD8+ T lymphocyte PD-1 expression were prognostic biomarkers.
2021-01-09T14:44:34.911Z
2021-01-09T00:00:00.000
{ "year": 2021, "sha1": "1784c570fcc2b6a8abdac3b12c04127e17fc2311", "oa_license": "CCBY", "oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/s13054-020-03427-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5d3baffc1704fab88717c2185ca176304a51caa3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269716065
pes2o/s2orc
v3-fos-license
Implementing no-signaling correlations as a service

We deal with no-signaling correlations that include Bell-type quantum nonlocality. We consider a logical implementation using a trusted central server with encrypted connections to clients. We show that in this way it is possible to implement two-party no-signaling correlations in an asynchronous manner. While from the point of view of physics our approach can be considered as the computer emulation of the results of measurements on entangled particles, from the software engineering point of view it introduces a primitive in communication protocols that can be capable of coordinating agents without revealing the details of their actions. We present an actual implementation in the form of a Web-based application programming interface (RESTful Web API). We demonstrate the use of the API via a simple implementation of the Clauser-Horne-Shimony-Holt game.

The study of nonclassical correlations was triggered by the Einstein-Podolsky-Rosen paradox 1, raising a fundamental question of physics. The problem was first quantified by Bell 2, who studied a scenario in which two separated parties each hold a physical system, the two systems having interacted before. He pointed out that if the parties can choose between different measurements on their systems, the measurement results can show correlations that cannot be explained by the assumption of pre-shared randomness; this is reflected in the violation of certain inequalities. The underlying physical phenomenon is quantum entanglement 3,4. Notably, the correlations obey the no-signaling property: they cannot be used for transmitting information between the parties. The first experiment to verify such correlations was proposed by Clauser, Horne, Shimony, and Holt 5; however, it was an extremely hard task to produce such correlations with the technology of the 1960s.

The evolution of lasers and nonlinear optics in the 1990s, notably the availability of entangled photon pairs 6, has brought Bell-type correlations to the forefront of research interest. The structure of quantum and generic nonlocal no-signaling correlations has been broadly studied and understood 7. Device-independent quantum cryptography 8,9, based on this kind of correlations, is now one of the most promising technologies, and a broad variety of protocols have been designed and demonstrated for numerous tasks, including secure key distribution 10, bit commitment 11, and digital signatures 12. Quantum communication with satellites has now become reality 13, and quantum communication networks are being built 14.

Even though nonlocal no-signaling correlations have been discovered with motivations dominantly rooted in physics, they are of interest per se, also in other scientific fields 15. From a system engineering point of view, one can think of protocols in which there are connections between parties that do not facilitate communication but can coordinate the actions of the parties. This can be relevant even when the formation of these correlations is not instantaneous and their implementation is carried out via communication with a trusted server on encrypted channels. This is the approach we follow in the present paper: the nonlocal correlations are generated by allowing the software components that implement them to communicate with a central trusted server. We call this a "logical implementation", as opposed to "physical implementations" based on quantum measurements.
It is important to note that our implementation assumes the existence of a communication channel between the nodes and a central server; hence, direct communication between the parties cannot be excluded by the laws of physics, unlike in the case of physical implementations. If, however, the parties' activity is restricted to using the implemented no-signaling correlations, these alone do not allow for any communication. Certainly this approach excludes applications aiming at the creation of encrypted channels, like quantum key distribution; however, there are many other possible applications to discover. Game theory 16,17 can serve as a guideline for designing such applications, in which coordination without sharing local details is important. Supra-quantum no-signaling correlations, that is, those which cannot even be realized using quantum systems without interaction, are also important in theory 18, as are their possible applications. In the lack of an accessible implementation, such applications have hitherto been largely unexplored.

As for the technical implementation, our service relies on RESTful Web API technology, currently the dominant one in network services. A software library that hides the otherwise simple details of low-level API operation can easily be developed in virtually any development environment or programming language. This facilitates the implementation, development, and testing of any protocol based on nonlocal no-signaling correlations, the development of computer applications using such resources, etc. This can be useful for the better understanding of actual experiments 10 or the optimization of protocols 19.

From the point of view of physics, a logical implementation is a computer emulation of quantum correlation experiments or protocols that, unlike physical implementations, requires interaction between parties, and the formation of the correlations is not instantaneous. However, the aforementioned library can be easily modified to use a physical device's API instead of the web-service based emulation. Recall that the ETSI standards for quantum key distribution have also resulted in a RESTful API specification 20, and establishing this specification has been an important step in the standardization of QKD technology, making quantum key distribution accessible for system engineers.

It is likely that if the quantum technology to physically implement certain nonlocal no-signaling correlations matures, the physical devices will appear to a software developer in a way similar to our present implementation. In this way an application developed using our framework can be easily modified to use physical hardware in the future, as quantum communication devices become prevalent and affordable. Currently, on the other hand, it enables the development and testing of protocols without the need for the currently costly or not-yet-existent devices, and these protocols can be readily converted to use new physical hardware as soon as it becomes actually available.
Beside the system engineering aspects, the implementation results in a deeper understanding of no-signaling correlations, especially their asynchronous nature, which is not frequently mentioned. While the asynchronous nature is a straightforward consequence of the no-signaling principle, the no-signaling condition is essential for our particular implementation to work. In other words, the implementation of signaling correlations would require a different protocol. This aspect has motivated us in the discovery of the first such protocol in which the parties have to use their no-signaling resources in a different order 21.

This paper is organized as follows. First we provide a brief introduction to the theory of no-signaling boxes. Then we describe our result, which is the introduction of the notion of a "logical implementation", and the algorithm that realizes it. We then describe the methodology: the system architecture and the key details of the implementation. Then we discuss a particular example in detail, which showcases our approach in action. This also demonstrates the use of nonlocal no-signaling correlations as a service in a protocol engineering scenario. Finally the results are summarized and conclusions are drawn.

No-signaling boxes

Consider two parties, Alice and Bob, who are physically separated from each other so that communication between them is excluded, apart possibly from the following. They have access to a device (or, more precisely, a pair of devices) which generates pairs of random variates (a, b) ∈ A × B so that the variate a is available only to Alice while the other variate b is available only to Bob. Each output pair depends on an input pair, too, so that Alice's input x ∈ X is entered by Alice locally, and so is Bob's y ∈ Y. The distribution of the output variates depends on the pair of (local) inputs (x, y) ∈ X × Y according to the conditional probability distribution P(a, b | x, y). Such devices will be termed a "pair of boxes", or simply a "box" in what follows. We will assume the sets A, B, X, Y to be finite. So far we allow for arbitrary correlations; many bipartite boxes would enable communication between the parties, though we will focus on those which do not in what follows.

When a box is used multiple times, the input pairs and the corresponding variate pairs have to be labeled. The labels k will be elements of an arbitrary index set K, and the tuple (a_k, b_k, x_k, y_k) will be termed the data of transaction k. (We note here that in some contributions a given transaction, i.e. a single use of a box, is referred to as an instance of a box. When compared to those works, a "box" there is a "transaction" in our terminology.) We do not prescribe any ordering on the set K, albeit in practical realizations it is frequently related to time, e.g. due to a causal ordering. We assume that the probability distribution of the variates (a_k, b_k) in a given transaction k is entirely determined by x_k, y_k, and P(a, b | x, y), and is independent from any other inputs or outputs in the other transactions.
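The box and transaction notions above map directly onto a small data model; the following sketch uses our own names and is not the paper's actual schema:

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class Transaction:
    """Data of one use of a box pair, as defined in the text."""
    k: str                      # transaction label from the index set K
    x: Optional[int] = None     # Alice's input, None until she acts
    a: Optional[int] = None     # Alice's output
    y: Optional[int] = None     # Bob's input
    b: Optional[int] = None     # Bob's output

@dataclass
class Box:
    """A bipartite box described by P(a, b | x, y)."""
    # keys are (a, b, x, y); values are conditional probabilities
    p: Dict[Tuple[int, int, int, int], float]
```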
Let us now restrict our attention to those boxes which cannot be used by the parties to communicate. This is the exclusion of signaling: it implies that Alice and Bob cannot use their box to implement a communication channel solely by using the boxes. In mathematical terms this can be expressed with the following no-signaling conditions:

∑_{b ∈ B} P(a, b | x, y) = ∑_{b ∈ B} P(a, b | x, y′) for all a ∈ A, x ∈ X, and y, y′ ∈ Y, (1)

and similarly

∑_{a ∈ A} P(a, b | x, y) = ∑_{a ∈ A} P(a, b | x′, y) for all b ∈ B, y ∈ Y, and x, x′ ∈ X. (2)

Note that these conditions imply the existence of local marginals of the joint conditional probability distribution. Hence, it is possible to operate the boxes asynchronously: Alice can provide x_k anytime, obtaining a_k immediately, and the same holds for Bob, y_k and b_k. The time at which a party uses the box in a given transaction, and thus the order of the uses, is arbitrary. This property can also give rise to interesting protocols 21. In what follows we will restrict ourselves to no-signaling boxes.

The notion of locality of a box pair is to consider those which can be realized with randomness shared in advance, before the transaction. A scenario with such a box pair is illustrated in Fig. 1. Such boxes are described by conditional probability distributions that can be expressed as a convex combination of products of local deterministic boxes. A local deterministic box on Alice's side assigns a given a(x) to each x, whereas such a box on Bob's side assigns a given b(y) to each y; their product is the parallel application of the two. Such a pair of boxes has a deterministic (Dirac) conditional probability distribution. Randomness shared in advance enables the realization of any convex combination of these distributions without any communication between the parties. Such boxes are termed "local".

No-signaling boxes form a significantly larger set than that of the local boxes. Therefore there exist "nonlocal correlations" which are interesting both fundamentally and in applications. Some of these can be realized with physical arrangements (i.e. quantum mechanically) in such a way that there is no interaction needed between the parties when using the boxes. In such implementations, however, pairs of quantum systems in an entangled state are to be shared in advance, similarly to the pre-shared randomness in the case of local boxes. This scenario is depicted in Fig. 2. The physical systems are particles, typically photons in many realizations. The parts of the system are initially at the same location, a source, where they interact, which results in an entangled state in some of their internal degrees of freedom, like the polarization of the two photons. The two subsystems are then sent to the parties Alice and Bob, who choose the measurements corresponding to the inputs x and y and carry them out on their particles to get a and b as the measurement results.

Notably, the operation is instantaneous: the parties obtain the (correlated) results immediately after sending the input to the box on both sides, even if the parties' separation is space-like and thus there is no way to communicate. Whenever Alice and Bob carry out their measurements, the results are readily available right after the completion of the measurement of each party; there is no need to wait the minimum time that would allow the two sites to communicate. (Recall that information can only be propagated at a limited speed. There will be a "local" reaction time of the box, but this can be negligible.) This feature can be important in certain applications 16.
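For finite boxes, conditions (1) and (2) are straightforward to check numerically. A minimal sketch (our code; the Popescu-Rohrlich box used here as a test case is introduced later in the text):

```python
from itertools import product

def no_signaling(p, A=(0, 1), B=(0, 1), X=(0, 1), Y=(0, 1), tol=1e-9):
    """Check conditions (1) and (2) for a box given as p[(a, b, x, y)]."""
    for a, x, y1, y2 in product(A, X, Y, Y):   # Alice's marginal free of y
        if abs(sum(p[(a, b, x, y1)] for b in B)
               - sum(p[(a, b, x, y2)] for b in B)) > tol:
            return False
    for b, y, x1, x2 in product(B, Y, X, X):   # Bob's marginal free of x
        if abs(sum(p[(a, b, x1, y)] for a in A)
               - sum(p[(a, b, x2, y)] for a in A)) > tol:
            return False
    return True

# Popescu-Rohrlich box: a XOR b = x AND y, outputs otherwise uniform.
pr = {(a, b, x, y): (0.5 if (a ^ b) == (x & y) else 0.0)
      for a in (0, 1) for b in (0, 1) for x in (0, 1) for y in (0, 1)}
assert no_signaling(pr)
```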
The measurement by each party is done solely on the particle available to the given party. Thus there is no interaction or communication between the parties (after sharing the pair of particles), nor is there any interaction or communication between the boxes of Alice and Bob. Thus it is guaranteed that no other party will know about the particular values of x (y) and a (b) but Alice (Bob).

Note also that from the no-signaling principle it follows that the two parties may carry out their respective measurements anytime, in arbitrary causal order, without synchronization. If Alice and Bob could store the particles for an arbitrarily long time, they could share enough entangled particle pairs in advance and could choose freely when to make a given measurement. In practice, however, the coherence time of such particles is short, thus the entangled state is destroyed within a very short time. Hence, in practical scenarios the parties obtain the particles from a central source (e.g. via fibers or free-space propagation), and often there is a time synchronization to ensure that the measurements are associated with actual members of pairs. Hence, the realization of arbitrary timings, that is, deliberately using up pairs with different timing and ordering on the two sides, has to our knowledge not yet been explored in experiments, although it would not be impossible, apart from some challenges due to loss and decoherence.

[Figure 1 caption: A local box pair: it can be implemented with randomness shared in advance. The vertical dash-dotted line represents spatial separation, whereas the horizontal one represents difference in time. Thus there are two phases: the preparation of the box and the actual use. In the second phase no communication is allowed between the boxes.]

Boxes that can be realized physically include local boxes as a proper subset, and they are a proper subset of no-signaling boxes. The structure of the set of physically realizable boxes is defined by the laws of quantum mechanics; we will not go into detail but will show an example of this kind. Our logical implementation covers nonlocal no-signaling boxes in general.

Results

Our goal is to logically implement a pair of nonlocal no-signaling boxes whose behavior is described by a given conditional probability distribution P(a, b | x, y), so that it is accessible from software applications. From the point of view of quantum nonlocality, the logical implementation is a computer emulation of the behavior observed in the experiment. We define first what we mean by a logical implementation or emulation, as opposed to physical realizations. Then we describe the principle of the actual algorithm.

Logical implementation

In our scenario we accept that there is two-way communication between the boxes at the parties and a trusted server. This is depicted in Fig. 3.
We require, however, that while the boxes themselves communicate, the parties cannot use the box pair for sending any information: the correlations are no-signaling from the actors' perspective. Otherwise speaking, the correlations themselves are nonlocal, regardless of the implementation. When comparing with the physical implementation, as a trivial consequence of being generated via communication, the central server will have all information about the results, and there is also a need to wait for the completion of the communication with the central server before the result becomes known, so the formation of the correlations is certainly not instantaneous. On the other hand, because of the no-signaling principle, no synchronization is assumed and the set of transactions K does not need to have a causal structure. As we will point out later, this feature is easily implemented in this framework.

As an additional benefit, it is certainly possible to implement supra-quantum correlations: those which are no-signaling but cannot be realized quantum mechanically, such as the Popescu-Rohrlich (PR) box that will be described in detail later. Assuming that the server is trusted, that the communication between the server and the boxes is secure, and that the parties use the boxes according to the prescription, such an implementation can be interesting per se. We conclude this subsection by tabulating the required resources and the features of the various implementations in Tables 1 and 2.

Algorithm

Let us now describe the actual algorithm of our implementation. A pseudocode for the algorithm is provided in Fig. 4. Assume first that Alice is the first to send her input, that is, she uses her box in the given transaction before Bob. (Recall that no synchronization is assumed but the transaction is uniquely identified by a value of k.) So Alice sends a particular x_k value in transaction k to the box. The result a_k of the box is drawn according to the local marginal

P_A(a | x_k) = ∑_{b ∈ B} P(a, b | x_k, y),

where y ∈ Y is an arbitrary fixed y (due to the no-signaling condition in Eq. (1) any element can be chosen). The respective value of the random variate a_k is sent to Alice, while the triple (k, x_k, a_k) is stored in the database. If Bob provides his input (k, y_k) later and asks for his output b_k, it is a random variate drawn according to the conditional distribution

P(b | x_k, y_k, a_k) = P(a_k, b | x_k, y_k) / P_A(a_k | x_k),

where

P_A(a_k | x_k) = ∑_{b ∈ B} P(a_k, b | x_k, y_k),

and the transaction is completed (after storing all details in the database). As the protocol is symmetric, when Bob is the first to initiate transaction k, the roles are reverted but the procedure is the same.

In a software implementation it is therefore vital to ensure the following condition. When transaction k has been initiated by Alice, no reply to Bob can be generated before the transaction has concluded for Alice, that is, before a_k is generated and (k, x_k, a_k) has been stored. The same holds for Bob's initiation of transaction k with (k, y_k, b_k). Using conventional relational database management, this can be ensured by locking the table of transactions, or at least transaction k, whenever it is acted upon on behalf of either of the parties.

Note that there can be two kinds of actions: if the transaction was already initiated by the other party, then we use the joint probability conditioned on the known data, whereas if it was not, we just use the local marginal but keep the given input. Looking at the empirical marginals ex post, they will follow the local marginal distributions that exist because of the no-signaling condition.
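A compact Python sketch of this transaction logic follows (the pseudocode is given as Fig. 4 in the paper; the in-memory store, the lock and the function signature below are our simplifications, not the actual server code):

```python
import random
import threading

lock = threading.Lock()   # stands in for locking the transactions table
store = {}                # k -> partial transaction record

def use_box(p, k, party, inp, A=(0, 1), B=(0, 1)):
    """Serve one party's input for transaction k of box p[(a, b, x, y)].

    party is "alice" (input x, output a) or "bob" (input y, output b).
    No reply is produced before the record is stored, mirroring the
    locking requirement discussed in the text.
    """
    with lock:
        t = store.setdefault(k, {})
        if party == "alice":
            if "b" in t:               # Bob acted first: condition on his data
                y, b = t["y"], t["b"]
                norm = sum(p[(a, b, inp, y)] for a in A)
                w = [p[(a, b, inp, y)] / norm for a in A]
            else:                      # Alice is first: local marginal, any fixed y
                w = [sum(p[(a, b2, inp, 0)] for b2 in B) for a in A]
            out = random.choices(A, weights=w)[0]
            t.update(x=inp, a=out)
        else:
            if "a" in t:               # Alice acted first: condition on her data
                x, a = t["x"], t["a"]
                norm = sum(p[(a, b, x, inp)] for b in B)
                w = [p[(a, b, x, inp)] / norm for b in B]
            else:                      # Bob is first: local marginal, any fixed x
                w = [sum(p[(a2, b, 0, inp)] for a2 in A) for b in B]
            out = random.choices(B, weights=w)[0]
            t.update(y=inp, b=out)
        return out
```

By the no-signaling conditions, the fixed dummy input used in the "first mover" branch does not affect the served marginal, which is exactly why this construction works only for no-signaling boxes.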
Methodology

In this section we describe the software architecture that has been used for the implementation. The components of the IT architecture are depicted in Fig. 5. The implementation is based on a central service run on a server. The service provides a RESTful API to clients, using HTTP GET requests with URL parameters, and returning the result in JSON format. (An example of a session will be presented later.) The server component realizes a component needed for user authentication and management, and a component that realizes the box emulator algorithm. Both of the components use the same underlying relational database, which they communicate with via its standard internal interface.

The server component is implemented in the Python programming language. It is based on SQLAlchemy 23 as an object-relational mapper and Flask 24 as the Web API provider framework. The currently running beta version uses PostgreSQL 25 as a relational database manager. The random variates used by the server at the time of writing this paper are obtained from a "Quantis" USB Quantum Random Number Generator, model "USB-4M", manufactured by "ID Quantique" 26, with the serial number 184443A410. The Python library for accessing this device was also developed in the framework of the present project 27. At the time of the publication of this article as an e-print, the beta version will be available to the public after a free registration, for academic and educational purposes 28.

Owing to the use of a standard API, a client can be any device running any software that is capable of consuming RESTful APIs at a basic level. Hence the possible client implementations and devices range from tutorial codes in various programming languages through smartphone applications to test implementations of cryptographic protocols. A screenshot of a simple desktop graphical user interface is to be found in Fig. 6.

As a demonstration, consider the Clauser-Horne-Shimony-Holt (CHSH) game mentioned in the abstract: Alice and Bob each receive a uniformly random input bit, x and y respectively, and each returns an output bit, a and b; a round is won whenever a ⊕ b = x · y. The box pair which achieves an average payoff of 1, also tabulated in Table 3, is called the Popescu-Rohrlich box. If they both feed their boxes with their inputs x and y and provide the respective outputs a and b, they will be positively rewarded in all cases. It can be shown, however, that access to such a box pair does not enable either party to send any message or signal to the other. They get, however, coordinated without communication.

Let us now see how the game-play is actually implemented using the API calls. (We will use the curl command, available on most Linux systems, to communicate with the API via GET requests. Alternatively, the URL can be written into the browser.) Note that both parties obtain a uniformly distributed random result for their inputs, when observed just locally. However, when analyzed together, the expected joint conditional probability distribution of the Popescu-Rohrlich nonlocal box can be observed.

To demonstrate and verify this we have performed a systematic test of the API: a virtual Bell experiment. The code of the test is available so that the test can be reproduced; the documentation of the test contains all technical details. To run the test, a pair of API keys is needed; it is assumed to be run by two parties, one in the role of Alice, the other in that of Bob. At the time of writing, the test code supports two-input-two-output boxes. We have tested on a PR box, but the test can be done with any other box of this kind.

The test can be run as follows. In a preparation phase, Alice and Bob create a box, e.g.
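The original curl listings are elided from this extraction; purely as an illustration, an equivalent session using Python's requests library could look as follows. The endpoint path and parameter names are invented placeholders: only the useBox call name, the GET-with-URL-parameters convention and the JSON replies are taken from the text.

```python
# Hypothetical client session; endpoint and parameter names are assumptions.
import requests

BASE = "https://example.org/api"  # placeholder server address

# Alice uses the shared box in transaction 42 with input x = 1.
reply = requests.get(f"{BASE}/useBox",
                     params={"apikey": "ALICE_KEY", "box": "BOX_ID",
                             "transaction": 42, "input": 1}).json()
a = reply["output"]

# Bob, possibly much later and from another machine, sends y = 1.
reply = requests.get(f"{BASE}/useBox",
                     params={"apikey": "BOB_KEY", "box": "BOX_ID",
                             "transaction": 42, "input": 1}).json()
b = reply["output"]

# For a PR box, a XOR b equals x AND y in every completed transaction,
# yet each output alone is a uniformly random bit.
```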
a Popescu-Rohrlich box to be tested. They agree on the box ID. In the first phase of the test each party runs a program which carries out a number of "experiments", each consisting of a number of transactions. The tests can certainly be run on separate computers. Within a measurement, a sequence of transaction IDs is generated. The transactions are, however, executed in a random order, different at each party, in order to verify the asynchronous operation. In each transaction, both parties generate a random input bit locally, and obtain the output from the API. The results of these "measurements" are saved into files. In the evaluation phase the saved test results are collected on the same computer and the empirical joint conditional probability distributions are evaluated. The empirical probability distribution should agree with the expected behavior.

We have carried out such a test to verify the proper operation of the simulation. In particular we have verified the operation of a PR box whose behavior (i.e. theoretical conditional probability distribution) is tabulated in Table 3. In Table 4 we present the results of 5 experiments, with 40,000 measurements each. The inputs at the measurements are uniformly distributed both on Alice's and Bob's side; hence, in each experiment there are about 10,000 samples for each distribution.

We have found that the events with zero probability according to the theoretical conditional probability distribution never occur in the samples, which is not unexpected: it should be so by the construction of the algorithm generating the data. Once the input pair (x, y) is given, there are two possible outcomes remaining with equal probabilities; hence we are testing whether the respective part of the sample is drawn according to a Bernoulli distribution with equal probability of the two events. In fact the algorithm generates the respective random bits directly; hence the present test is essentially a direct test of the random generator used as the source of random bits in our implementation.

The empirical distributions, i.e. the relative frequencies of the outcome pairs, are apparently close to the uniform distribution. In order to quantitatively verify whether the API realizes the expected random behavior, we apply a standard χ² statistical test, using the implementation in the Python SciPy package 29, for each (x, y) input pair in each experiment. The test yields a p-value, a parameter between 0 and 1. It is commonly accepted that if this parameter is in the range [0.05, 0.95] then the test is passed: the results are really random and the distribution indeed assigns equal probability to the two events.

[Table 4 caption: Results of 5 experiments with a Popescu-Rohrlich box, with 40,000 measurements (transactions) in each experiment. The first column (Exp.) is the ordinal number of the experiment, and the next two (x, y) are Alice's and Bob's inputs, respectively. The next column (N) is the number of transactions with this input pair. The next four columns (q00, q01, q10, q11) contain the empirical probability distribution q. The last column is the p-value of the χ² test on the support of the probability distribution. Note that the events displayed with 0.0000 probability are the events with zero probability in the theoretical distribution, and indeed they never happened in any of the experiments.]
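The measurement and evaluation phases of this test can be sketched as follows, substituting the local use_box emulator and the pr distribution from the earlier sketches for the remote API (our code, not the published test scripts):

```python
import random
from collections import Counter

ids = [f"t{i}" for i in range(40_000)]            # agreed transaction IDs
x_in = {k: random.randint(0, 1) for k in ids}     # Alice's local random inputs
y_in = {k: random.randint(0, 1) for k in ids}     # Bob's local random inputs

# Interleave the two parties' actions in an arbitrary global order to
# exercise the asynchronous operation of the service.
actions = [("alice", k) for k in ids] + [("bob", k) for k in ids]
random.shuffle(actions)
a_out, b_out = {}, {}
for party, k in actions:
    if party == "alice":
        a_out[k] = use_box(pr, k, "alice", x_in[k])
    else:
        b_out[k] = use_box(pr, k, "bob", y_in[k])

# Empirical joint conditional distributions, as tabulated in Table 4.
n_xy = Counter((x_in[k], y_in[k]) for k in ids)
n_full = Counter((x_in[k], y_in[k], a_out[k], b_out[k]) for k in ids)
for (x, y), n in sorted(n_xy.items()):
    q = [n_full[(x, y, a, b)] / n for a in (0, 1) for b in (0, 1)]
    print(x, y, n, ["%.4f" % v for v in q])
```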
Since the zero-probability events never occur, in complete agreement with the theoretical distribution, the χ² test should be restricted to the support of the probability distribution. The number of samples is set to a high value, as the χ² test is better done with large samples. (The empirical distribution is similar to the presented one already after drawing a few hundred samples, but that does not yet prove the appropriate behavior in a statistical sense.) The data of Table 4 convincingly prove that the API works as expected. We have published the code 30 implementing the whole testing process, including the creation of the box, the experiment, and the evaluation, in the form of scripts.

Conclusions

We have reported on the design and implementation of a RESTful Web API service that implements nonlocal no-signaling correlations logically. Thereby it is capable of emulating nonlocal quantum correlations, which are perhaps the most intriguing features of quantum mechanics and are essential ingredients of most applications in quantum information and communication, notably in device-independent quantum cryptography. The described web service has also been implemented by us and made available to the community 28.

From the point of view of scientific research, one of our contributions is the algorithm that implements no-signaling correlations using a central trusted resource: we have not seen it before in the literature. The discussion of the asynchronous nature of no-signaling correlations can also be considered as a minor contribution of this kind. While it is mentioned in some previous contributions, it has gained less attention before, probably because of its difficult implementation in quantum experiments. During the development we report here, this has led us to finding the first known application in which the non-sequential use of nonlocal no-signaling resources is useful 21.

From the technological point of view, our contribution makes nonlocal no-signaling correlations readily available using perhaps the most commonly used web service technology. Recall that trusted elements are also involved in practical quantum key distribution. Secure application entities, for instance, receive quantum keys from key management entities via RESTful APIs according to the ETSI-014 standard 20; all these elements are considered as trusted. As opposed to that, our setup does not enable the remote parties to set up secure channels, and the implementation of the no-signaling correlations is based on communication with a trusted server. If, however, the parties are not allowed to use any other means of communication to the server or any other party than the API calls, this alone will not enable them to build a working communication channel. The possible practical use of such a resource has not yet been considered. No-signaling correlations may find their use in the engineering of communication protocols: they can facilitate the coordination of actions without revealing the details of the decisions of parties. In addition, a service emulating quantum correlations can be used as a test and development environment for applications, even those designed for physical realizations of quantum correlations: device-independent cryptographic protocols, for instance. The API technology paves the way for designing a broad range of applications, ranging from demonstrations on various platforms to, possibly, practically useful ones.
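The χ² evaluation itself can be as simple as the following sketch; the counts are illustrative stand-ins for the two outcomes on the support of the distribution for one (x, y) pair:

```python
from scipy.stats import chisquare

# Observed counts of the two outcome pairs allowed by the PR box for one
# (x, y) input pair, restricted to the support of the distribution.
observed = [5012, 4988]                  # illustrative numbers only
stat, p_value = chisquare(observed)      # default expectation: uniform
print(f"chi2 = {stat:.3f}, p = {p_value:.3f}")
# Pass criterion used in the text: 0.05 <= p <= 0.95.
```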
Finally, from the dissemination point of view we believe that our API is an enabler in the experience-based teaching of nonclassical correlations and Bell-type quantum phenomena. At the time of writing of this paper, this application of our API is being tested in a high-school environment, and a mobile phone application is planned to increase the dissemination impact. We hope that these will help a number of people to understand the basics of the phenomena whose experimental study has led to the Nobel prize in Physics awarded in 2022 31 .

Figure 2. A quantum box pair; compare also with Fig. 1. The circles with upwards arrows inside represent quantum systems (e.g., particles); they are shared in advance. Due to their interaction at the source, they form pairs which are entangled. This enables the realization of nonlocal no-signaling correlations, albeit not the most general ones.

Figure 3. The nonlocal box emulation scenario. Bidirectional communication is allowed between the boxes of Alice and Bob, via an encrypted channel, with a trusted server, during the whole process of using the box. Meanwhile, Alice and Bob are still not able to communicate with each other by using the box.

Figure 4. The pseudocode of the API call useBox on Alice's side. Let x_k (y_k) denote Alice's (Bob's) input. If Bob is the first, then the roles are reverted.

Figure 5. UML 22 component diagram of the software architecture.

Figure 6. A simple desktop graphical user interface for the logical nonlocal box implementation.

Table 1. A comparison of resources required to realize a local, a quantum, and a logically implemented generic no-signaling box pair.

Table 2. A comparison of features offered by a local, a quantum, and a logical (emulated) no-signaling box pair.
2024-05-12T06:16:02.570Z
2024-05-10T00:00:00.000
{ "year": 2024, "sha1": "5db0c4c565bbde7820688c0983cf9e8881ee6fe4", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "56916b57e051d3f088ec590e75b4a533b0ec26ae", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
14677696
pes2o/s2orc
v3-fos-license
Linear antenna array optimization using flower pollination algorithm

Flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear arrays so as to obtain optimized antenna positions that achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and subsequently the results are validated by benchmarking against results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases FPA outperforms the other evolutionary algorithms, and at times it yields a similar performance.

In this paper, a new nature-inspired evolutionary algorithm, the flower pollination algorithm (FPA) (Yang 2012; Yang et al. 2014), is proposed for linear antenna array optimization. FPA is a metaheuristic algorithm inspired by the pollination process of flowering plants. It was developed by Xin-She Yang in 2012 (Yang 2012). FPA has been applied to solve practical optimization problems in engineering (Yang et al. 2014) such as disc brake design, spring design optimization, welded beam design, speed reducer design and pressure vessel design. FPA has also been used in areas like solar PV parameter estimation (Alam et al. 2015) and fuzzy selection for dynamic economic dispatch (Dubey et al. 2015). However, to the best of the authors' knowledge, this is the first time that FPA is being proposed for linear antenna array synthesis. In this paper, FPA is applied to a linear antenna array in order to obtain an array pattern with minimum SLL. In addition, nulls are placed in desired directions by optimizing the spacing between the antenna array elements. Furthermore, the design problems of minimizing the peak SLL and of imposing deeper nulls in the interference directions under the constraint of a reduced SLL are modeled as optimization problems. To solve this design goal, the flower pollination algorithm (FPA) is used to determine optimum antenna positions in the array.

This section has presented a brief introduction to linear antenna arrays, the FPA and its applications in optimization problems, and the main objective of this work. The rest of the paper is organized as follows: the linear antenna array geometry, configuration and array factor equations are discussed in "Linear antenna array" section. "Flower pollination algorithm" section presents an elaborate description of the flower pollination algorithm along with a flowchart outlining the steps of FPA implementation. Various design examples for linear array synthesis, and the FPA-optimized antenna locations and corresponding array patterns, are put forward in "Results and discussion" section. The validation of the obtained results, when compared to other nature-inspired evolutionary algorithms, is also presented in this section. "Conclusion" section offers the conclusion.

Linear antenna array

A linear antenna array of 2N isotropic elements placed symmetrically along the x-axis is considered in this work, as illustrated in Fig. 1.
The array factor (AF) of this array is given by (1):

AF(θ) = 2 Σ_{n=1}^{N} I_n cos(k x_n cos(θ) + ψ_n)   (1)

where I_n, ψ_n and x_n are the excitation amplitude, phase and position of the nth element in the array, k is the wave number, given by 2π/λ, and θ is the azimuth angle. It is assumed that the antenna array is subjected to uniform amplitude and phase excitation, that is, I_n = 1 and ψ_n = 0. Thus, the AF in (1) gets modified to (2):

AF(θ) = 2 Σ_{n=1}^{N} cos(k x_n cos(θ))   (2)

The objective of this work is to apply the flower pollination algorithm to determine the optimized element positions, x_n, in order to achieve an array pattern with minimum SLL as well as placement of nulls in desired directions. In linear antenna arrays, proper placement of antennas is very essential. If the antennas are placed too close to each other, it leads to mutual coupling effects. On the other hand, if the antennas are placed too far apart, it leads to grating lobes. Thus, while solving this optimization problem, constraints on the inter-element spacing must be satisfied.

Flower pollination algorithm

Inspired by the pollination process of flowering plants, the flower pollination algorithm (FPA) was developed by Xin-She Yang in 2012 (Yang 2012). FPA is extensively used for optimization of multi-objective real-world design problems (Yang et al. 2014). FPA is based on the following four rules (Yang 2012): (i) Biotic and cross-pollination can be considered processes of global pollination, and pollen-carrying pollinators move in a way that obeys Levy flights. (ii) For local pollination, abiotic pollination and self-pollination are used. (iii) Pollinators, such as insects, can develop flower constancy, which is equivalent to a reproduction probability that is proportional to the similarity of the two flowers involved. (iv) The interaction or switching between local pollination and global pollination can be controlled by a switch probability p ∈ [0, 1].

The basic parameters of FPA are defined as follows (Yang 2012; Yang et al. 2014):

1. Population Size (n): FPA is a population-based metaheuristic algorithm in which candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions. Thus, FPA uses a population of n flowers/pollen gametes with random solutions as the starting point.

2. Switching Probability (p): Flower pollination activities can occur at both scales, local as well as global. In nature, local pollination is somewhat more likely than global pollination, because adjacent flower patches or flowers in close vicinity are more likely to be pollinated by local flower pollen than by pollen from far away. To mimic the switching between these two modes, a switching probability or proximity probability (p) can be effectively used to switch between common global pollination and intensive local pollination. During the execution of FPA, a random number between 0 and 1 is generated and compared with the switching probability. If this number is less than p, then global pollination is performed; otherwise local pollination is carried out.

3. L(β): In the case of global pollination, flower pollen gametes are carried by pollinators, such as insects, over long distances due to their ability to fly. The strength of the pollination is modelled by L(β), which is a step-size parameter, more specifically the Levy-flights-based step size.
Since insects can travel extensively with various distance steps, a Levy flight is used to mimic this characteristic efficiently.

4. γ: It is used as a scaling factor to control the step size of the Levy flights for global pollination.

5. ε: For local pollination, pollen is selected from different flowers of the same plant species or from the same population. This mimics flower constancy in a limited neighborhood. ε is drawn from a uniform distribution on [0, 1] so as to mimic a local random walk.

The parameters used in FPA along with their corresponding value/range are described in Table 1 (Yang et al. 2014). The implementation of FPA begins with the definition of the objective function and initialization of the population of flowers (n) with random solutions. The best solution in the initial population is computed. A switching probability p ∈ (0, 1) is defined; it controls the selection of either local pollination or global pollination. The choice between global pollination and local pollination is determined by generating a random number. If this random number is less than the switching probability (p), then global pollination is performed using (3). Otherwise, local pollination is carried out using (7).

The mathematical representation of global pollination [rule (i)] and flower constancy [rule (iii)] (Yang 2012) is given by (3):

x_i^{t+1} = x_i^t + γ L(β) (g_best − x_i^t)   (3)

where x_i^t is the solution vector x_i at iteration t, and g_best is the current best solution. γ is a scaling factor to control the step size. L denotes the Levy-flights-based step size, which corresponds to the strength of the pollination. Since insects may travel over long distances with varying distance steps, a Levy flight can be used to model this characteristic efficiently. L is drawn from a Levy distribution by using (4):

L ≈ [β Γ(β) sin(πβ/2)] / [π s^{1+β}],   s ≫ s_0 > 0   (4)

Γ(β) is the standard gamma function. Mantegna proposed a fast and accurate algorithm to generate a stochastic variable whose probability density is close to a Levy stable distribution (Mantegna 1994). The required Levy stable stochastic process is generated in a single step by this algorithm. The pseudo-random step size s, which obeys the Levy distribution, is drawn by using the Mantegna algorithm from two Gaussian distributions U and V as follows in (5):

s = U / |V|^{1/β}   (5)

U is drawn from a Gaussian normal distribution with zero mean and standard deviation σ given by (6), while V is drawn from a standard Gaussian distribution with zero mean and unit variance:

σ = { Γ(1 + β) sin(πβ/2) / [ Γ((1 + β)/2) β 2^{(β−1)/2} ] }^{1/β}   (6)

For local pollination, the following mathematical formulation is used (Yang 2012):

x_i^{t+1} = x_i^t + ε (x_j^t − x_k^t)   (7)

where x_j^t and x_k^t are pollen from different flowers of the same plant species. If x_j^t and x_k^t are selected from the same population, this is equivalent to a local random walk, given that ε is obtained from a uniform distribution on [0, 1]. The basic steps of FPA are illustrated in the flowchart depicted in Fig. 2.

Results and discussion

In this section, the FPA is applied to linear antenna arrays in order to determine the optimized antenna element positions that minimize the peak SLL and place nulls in desired directions. In design example A, the optimized antenna element locations are determined to minimize the peak SLL in the specified spatial region. Design examples B and C illustrate the application of FPA to determine the optimized antenna element positions in order to minimize SLL as well as place deep nulls in the desired directions. The FPA is implemented in MATLAB® and executed 15 times. The number of iterations for each run is set equal to 1000. All results were obtained using n = 25, β = 1.5, p = 0.8, and γ = 0.1.
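To make the update rules concrete, here is a minimal Python sketch of the FPA loop built from Eqs. (3)-(7); it is an illustration under the stated settings (n = 25, β = 1.5, p = 0.8, γ = 0.1), not the authors' MATLAB implementation, and the fitness function, dimension, and bounds are placeholders standing in for the array design problems below.

import numpy as np
from math import gamma as Gamma, sin, pi

rng = np.random.default_rng()

def levy_step(beta, size):
    # Mantegna's algorithm, Eqs. (5)-(6): s = U / |V|^(1/beta).
    sigma = (Gamma(1 + beta) * sin(pi * beta / 2) /
             (Gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    U = rng.normal(0.0, sigma, size)
    V = rng.normal(0.0, 1.0, size)
    return U / np.abs(V) ** (1 / beta)

def fpa(fitness, dim, lb, ub, n=25, beta=1.5, p=0.8, gam=0.1, iters=1000):
    pop = rng.uniform(lb, ub, (n, dim))      # flowers: candidate element positions
    fit = np.array([fitness(x) for x in pop])
    best = pop[fit.argmin()].copy()          # g_best (minimization assumed)
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:             # global pollination, Eq. (3)
                cand = pop[i] + gam * levy_step(beta, dim) * (best - pop[i])
            else:                            # local pollination, Eq. (7)
                j, k = rng.choice(n, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, lb, ub)     # keep positions within the search bounds
            f = fitness(cand)
            if f < fit[i]:                   # keep the better flower
                pop[i], fit[i] = cand, f
        best = pop[fit.argmin()].copy()
    return best

A call such as fpa(fitness, dim=14, lb=0.0, ub=7.0) would then return optimized positions for one half of a symmetric 28-element array, given a fitness function built from the array factor; the dimension and bounds here are illustrative assumptions.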
Peak SLL minimization

The fitness function used for the minimization of peak SLL is formulated as given by (8). The optimized element positions obtained for this design example are given in Table 2 and the array pattern is illustrated in Fig. 3. For benchmarking purposes, the peak SLL obtained for this design example using the proposed method (FPA) and other nature-inspired optimization techniques is summarized in Table 3. In comparison to (non-optimized) conventional arrays, and to arrays optimized using other optimization algorithms such as PSO (Khodier and Christodoulou 2005), ACO (Rajo-Iglesias and Quevedo-Teruel 2007) and CSO (Pappula and Ghosh 2014), the proposed approach (FPA) shows a marked reduction in SLL. The proposed method (FPA) gives a peak SLL of −23.45 dB. This is 10.22 dB lower in comparison to the conventional array. The peak SLL has been lowered from −20.72 dB.

SLL minimization along with null placement

The fitness function used for SLL minimization as well as for placement of nulls in desired directions is formulated as given in (9), where θ_li and θ_ui bound the spatial regions in which SLL is suppressed, Δθ_i = θ_ui − θ_li, and the null directions are given by θ_k. In (9), the first term of the fitness function is for SLL suppression and the second term accounts for the placement of nulls in desired directions.

Design example B

This design example illustrates the synthesis of a 28-element linear antenna array in order to achieve SLL minimization in the regions θ = [0°, 84°] and θ = [96°, 180°] along with null placement at θ = 55°, 57.5°, 60°, 120°, 122.5° and 125°. The fitness function used by the FPA for this design example is given by (9). The array pattern is shown in Fig. 4, and the optimized positions of the antenna elements are given in Table 4. It is seen from Fig. 4 that the proposed method using FPA enables the placement of deep nulls (as deep as −95.12 dB) in the desired directions. The null depths obtained by the proposed method using FPA at each of the specified directions are summarized in Table 5. The comparative analysis of minimum null depth and peak SLL obtained using the proposed method (FPA) and various other state-of-the-art optimization algorithms is shown in Table 6. It is seen that for this design example, the minimum null depth obtained by using FPA is −89.42 dB. This implies that the obtained nulls are at least as deep as −89.42 dB. This is an improvement of around 39 dB over the null depths obtained using PSO (Khodier and Christodoulou 2005) and ACO (Rajo-Iglesias and Quevedo-Teruel 2007). Compared to CSO (Pappula and Ghosh 2014), the proposed FPA approach improves the null depth by around 24 dB. The peak SLL obtained using the proposed method (FPA) is −20.46 dB, which is about 7.23 dB lower than the conventional array and the PSO-optimized array (Khodier and Christodoulou 2005), about 5.46 dB lower than the ACO-optimized array (Rajo-Iglesias and Quevedo-Teruel 2007), and about 7.67 dB lower than the CSO-optimized array (Pappula and Ghosh 2014).

Design example C

In this design example, FPA is used to optimize the antenna element positions for SLL minimization and null placement of a 32-element linear antenna array. The fitness function used by the FPA for this design example is given by (9). SLL reduction is desired in the spatial regions θ = [0°, 85°] and θ = [95°, 180°], whereas nulls are desired at θ = 81° and θ = 99° (very close to the first sidelobe). The array pattern is shown in Fig. 5 and the optimized positions of the antenna elements are given in Table 7.
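As an illustration of how such patterns are obtained from a set of optimized positions, the short Python sketch below evaluates the array factor of Eq. (2); the element positions used here are arbitrary placeholders, not the optimized values of Tables 2, 4 or 7.

import numpy as np

def array_factor_db(x_n, theta):
    # x_n: positions of the N elements on one side of the symmetric 2N-element
    # array, in wavelengths; theta: angles in radians; uniform excitation.
    k = 2 * np.pi                                 # wave number for lambda = 1
    af = 2 * np.cos(k * np.outer(np.cos(theta), x_n)).sum(axis=1)
    af = np.abs(af) / np.abs(af).max()            # normalize to the main beam
    return 20 * np.log10(np.maximum(af, 1e-8))    # pattern in dB

theta = np.linspace(0, np.pi, 1801)
pattern = array_factor_db(np.array([0.25, 0.75, 1.30, 1.85]), theta)  # placeholder positions

The peak SLL entering the fitness evaluation is then the maximum of this pattern over the specified sidelobe regions, and the null depths are the pattern values at the desired null directions.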
The array optimized by the proposed approach of using FPA has almost the same length as that obtained by CSO (Pappula and Ghosh 2014). It is seen from Fig. 5 that the proposed approach of using FPA enables the placement of nulls in the desired directions. CSO (Pappula and Ghosh 2014) places deep nulls of −80 dB, as seen in Table 8. However, the proposed approach (FPA) places the deepest null of −85.27 dB. The first side lobe obtained by FPA is about 3 dB higher than that obtained using CSO (Pappula and Ghosh 2014). However, the remaining sidelobes are almost similar to those obtained by using CSO (Pappula and Ghosh 2014).

Figure 6 shows the convergence of the fitness function versus the number of iterations for all three design examples. The comparison based on the number of iterations taken by different optimization techniques to reach the optimal solution is depicted in Table 9. It is observed that although FPA is simpler to implement and also yields improved performance, it takes more iterations to converge to the optimum solution as compared to PSO. In PSO, all the particles move through global search and end with local search in the last generation. The momentum effects on particle movement (e.g., when a particle is moving in the direction of a gradient) generally allow faster convergence. On the other hand, in FPA, global and local pollination techniques are carried out in each generation to create a balance between exploration and exploitation with the help of the switching probability. Thus, the algorithm is more likely to escape locally optimal points and yield a globally optimal solution. FPA has to perform the process of global search throughout, making it more computationally time consuming than PSO, as depicted in Table 9. It is seen that FPA converges to the optimum solution much faster than ACO. The ACO algorithm takes too long to converge, and also gets trapped in local optima while searching for an optimal solution, as there is no mechanism to control the randomness of the ants.

Effect of control parameters on quality of solution

The control parameters of FPA have been tuned in order to achieve a better quality of solution. This section presents the statistical results in terms of best, worst, mean and median fitness obtained by carrying out a detailed parametric study to tune the parameters of FPA.

Effect of variation in population size (n)

The final fitness values corresponding to the minimum side lobe level (design example A) and to the minimum SLL and null depth (design examples B and C) with variation in population size are shown in Table 10. FPA is executed 15 times for different population sizes, keeping all other parameters constant. As the population size is increased, the fitness values converge to a minimum. However, the computational time also increases with increasing population size. It is seen from Table 10 that n = 25 is an optimum choice, as the fitness values are minimum for this case and do not show significant change on further increase in n.

Effect of switching probability (p)

The final fitness values corresponding to the minimum side lobe level (design example A) and to the minimum SLL and null depth (design examples B and C) with variation in switching probability are shown in Table 11. FPA is executed 15 times for the different values of switching probability, keeping all other parameters constant. FPA essentially controls the degrees of exploration and exploitation with the switching probability (p).
Global and local pollination techniques are used to balance exploration and exploitation. A higher value of p makes the algorithm more likely to explore the search space globally and escape from local minima. It is seen from Table 11 that p = 0.8 is a good choice since it offers the minimum value of the fitness function. However, if p is increased further, the quality of the solution degrades. This is because it leads to too much exploration at the cost of too little exploitation, which in turn compromises the overall search performance.

Effect of β

β is the index used in the Levy distribution for generating Levy flights for global pollination. The final fitness values corresponding to the minimum side lobe level (design example A) and to the minimum SLL and null depth (design examples B and C) with variation in β are shown in Table 12. FPA is executed 15 times for different values of β while keeping all other parameters constant. It is seen that β = 1.5 is a good choice as it gives the lowest value of the fitness function. For small β, random walks tend to stay crowded around a central location and occasionally jump quite a big step to a new location. As β increases, the probability of performing a long jump decreases. For β = 1, the Levy distribution reduces to the Cauchy distribution, and for β = 2, a Gaussian distribution is obtained. As β varies from 1 to 2, the Levy distribution varies from Cauchy to Gaussian, and the tail probabilities vary from heavy to light. This makes β = 1.5 a good choice for an intermediate Levy distribution and Levy flight.
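The effect of β on the step-length tails can be checked numerically; the sketch below (an illustration of Mantegna's method of Eqs. (5)-(6), not taken from the paper) samples steps for several β values and prints tail quantiles, which shrink as β grows. β = 2.0 itself is avoided because the Mantegna scaling factor degenerates there (sin(π) = 0).

import numpy as np
from math import gamma as Gamma, sin, pi

rng = np.random.default_rng(0)

def mantegna_steps(beta, size=100_000):
    # Eqs. (5)-(6): s = U / |V|^(1/beta), U ~ N(0, sigma^2), V ~ N(0, 1).
    sigma = (Gamma(1 + beta) * sin(pi * beta / 2) /
             (Gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    U = rng.normal(0.0, sigma, size)
    V = rng.normal(0.0, 1.0, size)
    return U / np.abs(V) ** (1 / beta)

for beta in (1.0, 1.5, 1.9):
    s = np.abs(mantegna_steps(beta))
    # Median vs. extreme quantile: heavier tails for smaller beta.
    print(beta, np.percentile(s, 50.0), np.percentile(s, 99.9))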
2016-05-04T20:20:58.661Z
2016-03-10T00:00:00.000
{ "year": 2016, "sha1": "9f7873cf0027cb080d51700c08e8a0222df78ceb", "oa_license": "CCBY", "oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/s40064-016-1961-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9f7873cf0027cb080d51700c08e8a0222df78ceb", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
189509894
pes2o/s2orc
v3-fos-license
Meanings, Dimensions, and Categories of Mathematics Teacher Beliefs: A Navigation through the Literature

This paper aimed to discuss the meanings, dimensions, and categories of teacher beliefs about teaching and learning mathematics. I reviewed the relevant literature about teacher beliefs in general, beliefs about mathematics, and beliefs about mathematics teaching and learning in particular. Based on the review of the literature, I outlined the meanings of teacher beliefs and conceptualized three dimensions of teacher beliefs: the affective dimension, the cognitive dimension, and the pedagogical dimension. Then, I discussed three viewpoints from which to observe teacher beliefs: relational, institutional, and praxis lenses. I utilized these lenses to categorize belief constructs into three classes of beliefs about mathematics, teaching mathematics, and learning mathematics. These classes included instrumentalist, constructivist, and integral beliefs. I addressed the pedagogical implications of these categorical beliefs in the end.

INTRODUCTION

In this article, first, I present the different meanings of belief. Second, I describe three dimensions of beliefs. Third, I discuss three lenses through which to view teacher beliefs about mathematics and the pedagogy of mathematics. Fourth, I reconceptualize teacher beliefs about mathematics and pedagogy in terms of traditional, constructivist, and integral beliefs from the literature. Finally, I conclude with some implications of these belief categories. This paper draws upon my doctoral dissertation (Belbase, 2015) for the ideas discussed.

Meaning of Belief

There is no single commonly agreed upon definition of belief. There are diverse views on how educationists, psychologists, and philosophers define belief (Leder, Pehkonen, & Törner, 2002). Mathematics education researchers define teacher beliefs in a variety of ways (Furinghetti & Pehkonen, 2002). According to Schoenfeld (1985), mathematics-related belief systems are one's mathematical worldviews. Lester, Garofalo, & Kroll (1989) state that beliefs constitute the individual's subjective knowledge about self, mathematics, problem-solving, and the topics dealt with in problem statements. Likewise, Hart (1989) argues that belief is a certain type of judgment about a set of objects. Schoenfeld (1992) further elucidates the notion of belief as an individual's understanding and feelings that may shape the ways that he or she conceptualizes and engages in mathematical behavior. Pajares (1992) famously characterized teacher belief as a 'messy construct' that is difficult to define operationally.

Affective dimension of beliefs

Affect, in general, means one's emotional aspect of mind. Belief means one's conception of the certainty or uncertainty of something in mind. Hence, affective beliefs are associated with emotional contents in one's mind that may fall into different categories. Bodur, Brinberg, & Coupey (2000) outlined eight categories of affect: aroused, elated, pleased, quiet, calm, unpleasant, bored, and distressed. Each of these affective states influences what a person thinks about an object or event, leading to a belief about them. Affect and belief interact with each other. They seem to be interrelated within a dynamic system of thinking and acting in a context (Pepin & Roesken-Winter, 2015). Emotional factors such as perceptions, feelings, appreciations, motivations, values, and attitudes are interrelated with beliefs (Furinghetti & Pehkonen, 2002). At the foundation level of affect, a teacher may be aware of what, how, and when to teach mathematics.
The teacher may have preferences regarding content and the related processes and can initiate one action over another. He or she may justify the reasons for the actions based on his or her beliefs. He or she may exhibit positive or negative attitudes or behavior in the classroom toward the mathematical contents and apply the pedagogy that he or she thinks right. The teacher's intentions behind such classroom actions are the result of justification of his or her thinking within his or her belief system. Such selections of activities, and the judgments behind them, may ultimately be traced back to his or her beliefs (McLeod, 1988). Therefore, affect (feeling and emotion) may interact with one's cognition (mind and brain processes), which further shapes the person's belief in another dimension, the cognitive dimension.

Cognitive dimension of beliefs

One's beliefs about teaching and learning mathematics seem to have an intricate connection to his or her cognition. The cognitive dimension is about knowledge, comprehension, perception, experience and conceptions formed through active mental processes. Thompson (1992) considered that one's beliefs are related to his or her knowledge and conceptions about the subject matter and process. Therefore, a person's beliefs are connected to his or her ability in recalling, describing, comprehending, reasoning about and identifying the subject matter, pedagogy, and process in mathematics. His or her beliefs are concerned with mathematical rules, procedures and theories for analyzing, synthesizing, prioritizing, and categorizing the content, context, and process. He or she can assimilate, accommodate and adapt to them in solving mathematical problems. These beliefs about mathematical knowledge content, process, and pedagogy either may come through an authority or may be constructed or created by an individual or group. The cognition and beliefs of an individual or a group may interact, inform, and update each other through affect (Eichler & Erens, 2015), which influences the mental schema or image of an object or a process. The cognitive dimension of beliefs can play an important role in determining one's beliefs about the content and pedagogical process. The mental operation of cognition provides one a content or proposition of beliefs about something (Spezio & Adolphs, 2010) that influences action and interaction in a context, shaping the relational domain, that is, pedagogy. Therefore, the affective and cognitive conditions of a person influence the third dimension of belief, the pedagogical dimension.

Researchers have studied teacher belief and its interrelation with affect in mathematics education. They indicate the pressing need to make sense of teacher beliefs in relation to students' performance and practices in mathematics education. Therefore, it appears that there is a strong correlation between beliefs and the classroom practices of teachers (Bandura, 1986; Hashweh, 1996; Pajares, 1992), and this correlation is consistent in many cases (Savasci-Acikalin, 2009). However, this may not be true in other cases because of various reasons, for example, institutional goals, limitation of time, the obligation to complete the course in limited time, the nature of assessment, lack of administrative support for a reform approach, and parent expectations of high grades. Hence, the relational lens largely focuses on the contents of beliefs and the relationships among these contents, as illustrated in Figure 1.

Institutional Lens

There is plenty of research on teacher beliefs concerning the change of their beliefs toward reform-oriented practices.
Some of these studies before 1990 (e.g., Grant, 1984; Nespor, 1987; Peterson, Fennema, Carpenter, & Loef, 1989; Rokeach, 1968; Stonewater & Oprea, 1988; Thompson, 1984) emphasize the description and existence of different beliefs without a focus on change of beliefs. Other studies from 1990 to 2000 (e.g., Battista, 1994; Brosnan, Edward, & Erickson, 1996; Brown & Baird, 1993; Jones, 1991; Kagan, 1992; Perry, Howard, & Tracey, 1999; Quinn, 1998a; Quinn, 1998b; Richardson, 1996; Schmidt & Kennedy, 1990; Tillema, 1995; Witherspoon & Shelton, 1991) seem to focus on the measurement of teacher beliefs using belief scales. These studies did not elaborate on the reasons for different beliefs. This issue is related to the shortage of studies on developing positive beliefs through education and development programs (Skott, 2015), thus pointing to the efforts of educational institutions to prepare teachers. Blömeke, Hsieh, Kaiser, & Schmidt (2014) attest to such issues related to teacher knowledge and beliefs. They outline several factors, for example, methodological, developmental, cultural and historical, social, and economic challenges. They also point to the issues of integration of beliefs with content knowledge (CK), pedagogical knowledge (PK), and pedagogical content knowledge (PCK), among others. These studies emphasized the role of mathematics teacher education programs in changing teacher beliefs in a positive way so as to change practice in mathematics education. The institutional lens essentially focuses on the institutional roles in forming different kinds of beliefs and on how institutional transactions shape and sustain those beliefs (see Figure 1).

Praxis Lens

We can view mathematics teachers' beliefs through a praxis lens. This lens provides us with a tool to observe teacher beliefs either as a process or a product (Grundy, 1987). Belief as a product is an a priori concept within a program before the actual intervention or action. As a process, it is part of unfolding beliefs based on personal interest, the influence of teachers and others, and the environment. The view of belief as a systemic product seems static, dependent on educational goals and programs, while the second view, of belief as a process, is a dynamic one that keeps changing with new experiences (Streibel, 1991). The static view aligns with Habermas's technical interest, with traditional instruction following an empirical-analytic method of knowledge and process of knowing about the world. These beliefs are guided by a set of institutional and governmental policies aimed at creating a certain type of human resource to serve society with a technical mind, and they do not leave enough room for implementing personal theories and beliefs in educational practices. The dynamic or process view embraces practical or emancipatory facets of mathematics education for social transformation. These emphases embrace the historical-hermeneutic role of knowledge construction by individuals and institutions for humanity and change (Pepin & Roesken-Winter, 2015). The emancipatory aspect emphasizes teacher beliefs, consciousness and awareness of social, political, or cultural transformations for a more equitable and just society (Streibel, 1991). Teacher beliefs can also be viewed from an epistemological perspective in terms of different philosophical paradigms (e.g., positivism, symbolism, logicism, constructivism, etc.) and their corresponding practical implications (Xenofontos, 2018).
The praxis lens mostly focuses on the theoretical and philosophical canons of teacher beliefs and on how these beliefs interact with or inform practice (see Figure 1).

The lenses discussed above helped me in categorizing teacher beliefs about mathematics, teaching mathematics and learning mathematics. I used the relational lens to observe teacher belief contents about mathematics, mathematics teaching and learning, and the relations among these contents, in order to idealize the categories of beliefs in terms of traditional, constructivist, and integral beliefs, going beyond the classical framework of toolbox, formalism, and problem-solving suggested by Liljedahl, Rolka, & Rosken (2007). Then, I used the institutional lens to view institutional roles and transactions to observe whether the idealized belief categories relate to a particular domain of content or process. While doing this, I examined student-teacher, student-student and student-content interaction and the classroom environment. Finally, I applied the praxis lens to interpret the idealized domain of categories from theoretical and philosophical views and everyday practices, with examples from the literature. Hence, I used these lenses to identify and conceptualize three specific categories of mathematics teacher beliefs about mathematics, teaching mathematics and learning mathematics from past studies, which are discussed in the results section.

RESULTS AND DISCUSSION

The results of the extensive review of the literature on mathematics teacher beliefs are presented in three domains: beliefs about mathematics, beliefs about mathematics teaching, and beliefs about mathematics learning. Each of these domains has three conceptual categories of beliefs, in terms of traditional, constructivist, and integral beliefs, that emerged from the analysis of teacher beliefs from the literature using the three lenses discussed above.

Beliefs about Mathematics

Many studies in the past discussed teacher beliefs about mathematics. Those studies explored teacher beliefs about mathematics based on its nature, utility or function, relationship with other disciplines, and methods. Aguirre (2009) described teacher beliefs about mathematics based on content domains such as algebra, geometry, calculus, and statistics and their relative degrees of abstractness. The abstract nature of algebra and calculus is related to negative beliefs, and other relatively less abstract areas of mathematics are related to positive beliefs. These beliefs are also influenced by the perceived usefulness of the subject. For example, many teachers see algebra as a less valuable subject because it is not directly applicable in daily life problem-solving. According to Dionne (1984), mathematics teachers' beliefs can be categorized as traditional, formalist, and constructivist depending on their ontological and epistemological characteristics. Later, researchers added a fourth view, integral beliefs about mathematics. The traditional belief about mathematics considers it an objective and absolute knowledge that is independent of human experience and cognition. Mathematical knowledge is independent of the knower. This kind of belief originated from Platonism in the philosophy of mathematics. The formalist beliefs are associated with the nature of mathematics as a formal, axiomatic, and rigorous body of knowledge with logical proofs and structures (Eichler & Erens, 2015; Ernest, 1991).
Another belief, related to constructivism, contemplates mathematics as a corrigible, changeable, and challengeable body of knowledge formed through human (individual or social) construction (Ernest, 1991; Prawat, 1992). Constructivists admit that knowledge of mathematics resides in the mind, not in a reality outside of it. For them, such knowledge does not have an existence outside human cognition, perception and experience (Ernest, 1991; von Glasersfeld, 1989). The integral beliefs about mathematics bridge all three beliefs together in a (w)holistic way, observing teacher beliefs as an interrelated construct of different beliefs which cannot be strictly isolated as this or that kind. It is more related to the cultural-historical-political agenda of mathematics. Therefore, teacher beliefs about mathematics can be discussed at three levels, the traditional, constructivist, and integral levels, as presented in Table 1. I discuss each of these belief levels under separate sub-sections.

Table 1. Teacher beliefs about mathematics
Traditional: Mathematics is absolute, objective, formal, axiomatic, structured, and independent of human cognition; it is a collection of rules and procedures, and it is a tool to solve problems; mathematical knowledge is fixed. [Dionne (1984), Törner (1998)]
Constructivist: Mathematics is relative, less formal, a creation, practical, subjective, contextual, and the science of every person; it involves cognitively challenging tasks. [Ernest (1989), Thompson (1992), Törner (1998)]
Integral: Mathematics is an integrated product of social, historical, political and cultural practices, integrating both formal and informal approaches. [Ernest (2015)]

Traditional beliefs about mathematics

Mathematics education researchers have outlined mathematics teachers' traditional beliefs about mathematics (Dionne, 1984; Furinghetti & Morselli, 2009; Furinghetti & Morselli, 2011; Handal, 2003; Törner, 1998). Teachers with traditional beliefs may consider mathematics an objective knowledge that is external to human cognition (Ernest, 1991). Such beliefs seem to align with the Platonist view (Linnebo, 2009). Teachers with traditional beliefs consider mathematics an abstract knowledge that is independent of the knower. Their view appears to be aligned with a positivist and realist stance (Tracey, Perry & Howard, 1998). They consider mathematics an exact science (Felbrich, Kaiser, & Schmotz, 2014). Such beliefs may harm innovative curriculum practice, that is, reform-oriented and research-based teaching and learning (Handal, 2003). These teachers view mathematics as universal rules and facts and as the science of elites (Shahvarani & Savizi, 2007). These beliefs are associated with rules, exact formulas and theories for memorization (Martino & Zan, 2011). The traditional belief about the nature of mathematics aligns with instrumentalist views that consider mathematics a collection of rules, facts, and skills (Eichler & Erens, 2015). Such teachers emphasize the justification of mathematical knowledge by external authorities (Ernest, 1991). They may consider mathematics an empirical science with objectivity and no role for one's subjectivity (Ernest, 1991). They may also consider mathematical proofs a part of didactic practice (Furinghetti & Morselli, 2009). They may conceive such belief systems from their experience as mathematics students (Skott, 2015). These beliefs emphasize mathematics as a domain of didactic knowledge to be learned rather than constructed by students.
Constructivist beliefs about mathematics

For constructivist teachers, mathematical knowledge is both technical and practical, with subjectivity and contextuality, and it is a science of every person (Shahvarani & Savizi, 2007). According to Ernest (1989), such beliefs align with problem-solving views, considering mathematics a dynamic subject which is continuously expanding as an invention and a cultural product. In this sense, mathematics is a body of knowledge with rules, axioms, facts, concepts, ideas, and theories that are contextual, social and cultural (Zakaria & Musiran, 2010). Within this system of beliefs, mathematics is considered a "problem-solving process, a discovery of the structure and regularities" (Felbrich et al., 2014). For constructivists, mathematical objects are created by mathematicians and practitioners of mathematics (Dionne, 1984). Such a view seems to be aligned with the process view, which considers mathematics a process of reasoning (Törner, 1998). For them, mathematics is an activity with conjectures, proofs, refutations, and contradictions (Thompson, 1992). Hence, all constructivists seem to believe that knowledge of mathematics is not absolute and universal, but "fallible, corrigible, tentative, and evolving" (Ernest, 1991). This view further supports Polya's idea of "mathematics in the making" (Polya, 1957).

Integral beliefs about mathematics

Some teachers may hold beliefs about mathematics such that there is no mathematical truth other than what we construct from social and cultural contexts (Ernest, 1991). Such beliefs range beyond the dualistic traditional-constructivist view (Dede & Uysal, 2012). They may incorporate the system view, which is much broader than the toolbox view. According to this view, mathematics is the logical study of axioms, theorems, and proofs to solve problems (Törner, 1998). Within this view, mathematics is a science to model and solve problems in society (Felbrich et al., 2014). Some teachers may describe their beliefs about mathematics either very negatively or positively, depending on their experience with mathematics and its applications. For example, some view that "one can learn mathematics only at school, mathematics is difficult, mathematics is abstract, and it has no connection with everyday life…" (Perkkilä, 2003). These negative beliefs may connote traditional or instrumental beliefs, but not all traditional beliefs are negative. Hersh (1979) views mathematics as a product of sociocultural and historical actions and efforts that help us understand the nature of problems and solve them. Within this view, we may consider mathematics a mental tool deeply rooted in the social, cultural, and historical origins of development and practice in human civilizations. It is also considered "human mathematical activity that produces mathematics" (Boyd & Ash, 2018), which promotes creativity and thinking rather than just a linear process of problem-solving. The literature on mathematics teacher beliefs does not explicitly explain integral beliefs about mathematics, though several aspects of them have been outlined together with other beliefs. The literature on teacher beliefs about mathematics mostly focused on the nature and functions of mathematics perceived by teachers as a basis to discuss their beliefs. It did not discuss explicitly how these beliefs about mathematics are related to the origin, and to subtleties in the development and dissemination, of mathematics impacting teacher beliefs. What is mathematics?
How does it originate as a domain of knowledge? How does mathematics go through the developmental phases of origination, modification, communication, and reorganization? There is a large gap in the literature addressing these questions about beliefs about mathematics.

Beliefs about Mathematics Teaching

There are contradictory views on teachers' beliefs about teaching mathematics. For example, some researchers (e.g., Kuhs & Ball, 1986) classified teachers' beliefs about teaching in terms of what is focused on during the teaching process. They categorized these beliefs as learner-focused, content-focused with an emphasis on conceptual understanding, content-focused with an emphasis on performance, and classroom-focused. The learner-focused beliefs emphasize engaging students in the construction of the meaning of what they learn. The content-focused view with an emphasis on performance stresses mastery of rules and procedures. The content-focused view with an emphasis on conceptual understanding stresses understanding of meanings. The fourth view, classroom-focused, is a holistic approach focusing on classroom dynamics as a community of practice in mathematics. van Zoest, Jones, & Thornton (1994) proposed a framework emphasizing three components: learner-focused interaction in the classroom, focus on conceptual understanding, and student performance. This framework is similar to the earlier one suggested by Kuhs & Ball (1986). Table 2 highlights these beliefs in terms of traditional, constructivist, and integral belief profiles.

Table 2. Teacher beliefs about mathematics teaching
Constructivist: Mathematics teaching means helping students construct meanings; playing a multidimensional role as mentor-facilitator-teacher; encouraging students to act as mathematicians; arguing about mathematical theories; developing conceptual reasoning; and teaching in a reform-oriented way. [Anderson (1996), Day (1996), Perkkilä (2003)]

Traditional beliefs about mathematics teaching

The traditional belief takes teaching as the diffusion of mathematical knowledge from a teacher to students (Ernest, 1991). This viewpoint is instrumentalist, emphasizing the teaching of formulas, facts, skills, and procedures (Dede & Uysal, 2012). Teaching is mostly teacher-centered, with drills, lectures, repeated practice, and teacher demonstrations. The teacher is the authority of knowledge as a mathematician. He or she passes down decrees of formal mathematics full of procedures, rules, and formulas, emphasizing accuracy, speed, and memorization. He or she underlines mathematical contents, concentrating on students' accuracy of performance and outcomes. Teachers with this kind of belief mostly focus on classroom activities heavily driven by contents, emphasizing accurate performance (Kuhs & Ball, 1986). There is nothing in between right and wrong mathematics. For a traditional teacher, his or her role is that of a trainer, and the students are trainees, passive receivers of mathematical knowledge (Ernest, 1989). A teacher with such beliefs may emphasize precise solutions demonstrating appropriate skills in solving mathematical problems. He or she highlights techniques and rules rather than mental processes. This kind of belief system can make teachers reluctant to reform curriculum and practice (Perkkilä, 2003). A teacher with such a belief system might be suffering from prior experience of mathematics: his or her experience and performance may have been poor due to a lack of understanding of mathematics in class and the teacher being an authoritative figure.
These teachers focus more on rote learning of rules and formulas, with one correct solution to a problem, rather than on a discovery approach in which students construct their own mathematics. They believe that textbooks are the sole resource of mathematical knowledge for teaching mathematics (Perkkilä, 2003). For them, producing the right answer in problem-solving is more important than the subjective thinking of students. They believe in instructing students with formal methods or procedures of mathematics (van Zoest et al., 1994). The teachers demonstrate the method of solving mathematics problems through a routine process, and students follow their steps, leading to a mastery approach (Boyd & Ash, 2018).

Constructivist beliefs about mathematics teaching

Constructivist teachers, in general, accept the students at the center of teaching and learning. They emphasize student-centered teaching with reasoning, creative thinking, and problem-solving. According to their views, teaching embraces students "constructing their meaning as they confront with learning experiences which build on and challenge existing knowledge" (Dede & Uysal, 2012). These teachers stress students' understanding of meaning and their construction of their own knowledge of mathematics (Kuhs & Ball, 1986). They believe that the teacher's role is largely to be a facilitator (Ernest, 1989) and that the students' role is co-construction of mathematical knowledge (Zakaria & Musiran, 2010), as mathematicians. Some teachers emphasize cooperative, collaborative, and shared activities in the classroom (Perkkilä, 2003). Cooperative activities in mathematics class can help students learn from each other and help each other to learn better (Ernest, 1991). These activities accentuate teaching for conceptual and procedural understanding, teaching problem-solving in context, using hands-on and technological manipulatives, and helping students produce their own solutions with their own logic and reasoning. For constructivist teachers, teaching is a creative-imaginative function that helps students learn mathematics by constructing their own mathematical ideas. Some mathematics teachers may be slow in espousing and implementing constructivist teaching due to their background and prior learning experiences. Constructivist teachers support problem-solving phases in teaching that include stating the problem, clarifying the variables, exploring the different possible solutions, escaping from dead ends, presenting one's solutions, and interpreting the solutions. These phases align with constructivist teaching with a statement of problems, identification of alternative solutions, avoidance of dead ends, and presentation of solutions (van Zoest et al., 1994).

Integral beliefs about mathematics teaching

A teacher may negotiate norms and values while developing a learning environment for students (Perry et al., 1999). Sometimes one's beliefs do not make clear whether they are precisely traditional or constructivist; rather, they extend in both directions, integrating the good aspects of either belief paradigm. Therefore, teaching mathematics can be viewed from an integral approach beyond the traditional-constructivist separation, as a methodological border-crossing (Giroux, 1992; Silver, 2003). This view focuses on the devolution of the disciplinary and interdisciplinary borders of mathematics. The idea of border-crossing is a revelation of an integrated approach to teaching mathematics.
Border-crossing goes beyond traditional-constructivist dualities of mathematics teaching, and it gears one's actions toward a critical and postmodern 'deconstruction of current mathematics teaching' (Nkhwalume, 2013). The postmodern view of teaching can transcend further with the self-reflexivity (Cain, 2011) of a teacher on the relationship of content and pedagogy with self, through introspection, retrospection, prospection, and idiosyncratic construction of mathematical meaning (Belbase, 2013b). This view decenters mathematics teaching, with opportunities for planning students' active engagement and construction of mathematics (Goss, Powers, & Hauk, 2013). In such cases, the teaching of mathematics may not have explicit boundaries to state whether it is traditional or constructivist (Smitherman, 2006). Therefore, it is a contextual and provisional process requiring adjustment in the classroom based on the cultural-historical context (Roth & Lee, 2007). Teachers may defy ability grouping and apply mixed groups so that low performers and slow learners benefit from collaboration with high-ability students in a variety of ways (Boyd & Ash, 2018). Such teachers believe in the integration of social and cultural context while teaching mathematics (Purnomo et al., 2016).

The discussion on mathematics teacher beliefs about mathematics teaching should highlight 'What constitutes teaching?' as an important question to consider in the analysis of belief categories. From the viewpoint of the institutional transaction, teaching is a process by which teachers help students to learn mathematics in schools. Then, questions arise: What are the elements of these institutional transactions taking place within the schools in which teachers are engaged in the so-called action of teaching? Do their expressed views reflect their beliefs about mathematics teaching? Do their actions in the classroom or elsewhere in schools exhibit their beliefs about mathematics teaching? Are these expressions consistent with their beliefs? Are their actions consistent with their beliefs? Are their beliefs consistent with what they exhibit through expressions or actions? Teaching mathematics is a very dynamic, complex process; metaphorically, it is like a wind that has no fixed direction, origin, or uniform impact. The literature on teacher beliefs about teaching mathematics has not yet fully explored the subtle nature of belief systems beyond categorizing them with specific signposts such as instrumental, reform-oriented, and integrated. Although there is plenty of literature on mathematics teacher beliefs about teaching mathematics, there is still scope for further studies to develop a deeper understanding of such beliefs with new categories.

Beliefs about Mathematics Learning

Mathematics teachers may have different beliefs about mathematics learning. Some researchers and scholars (e.g., Fisher, 1992) related beliefs about learning mathematics to knowing mathematical contents and procedures. Learning of mathematics is related to mathematical cognition with mental operations. The cognitive process includes reception of information, assimilation of received information, adoption of the information in a context, adaptation of the knowledge to a changed context, construction of meaning and interpretation of what has been learned, evaluation of the knowledge, and extension to other areas of problems.
Skemp (1971, 1978) proposed that mathematics learning serves either a relational or an instrumental function. Some teachers may believe in instrumental learning, which focuses on traditional approaches in the classroom with rote learning, drill-and-practice, memorization of rules, and repeated practice of problem-solving (Ernest, 1991). Instrumental learning highlights the use of formal rules, symbols, procedures, and formulas without adaptation (Idris, 2006). Teachers who believe in relational learning may emphasize contextual learning through the construction of meaning and concepts by the students (Kim & Albert, 2015). Students may construct a scheme (mental structure) and use the scheme to transfer concepts across contents (Skott, 2015). A prior schema may form a network of new schemas through adaptation and transformation of knowledge (Idris, 2006). Some teachers consider that learning mathematics is an active process of construction of meaning by students. Others consider that learning mathematics is guided by teacher motivation, direction and instruction (Wang & Hsieh, 2014). The former view, about the construction of meaning, is known as active learning, in which students design their own approaches to solve mathematical problems. The latter view, a form of passive learning, assumes that "students learn mathematics through following explanations, rules, and procedures transmitted by the teachers" (Wang & Hsieh, 2014). Therefore, there are conflicting views about learning mathematics, some of which are close to traditional beliefs and others near constructivist or integral beliefs. Table 3 summarizes teacher beliefs about mathematics learning in terms of three categories: traditional, constructivist, and integral beliefs.

Table 3. Teacher beliefs about mathematics learning
Traditional: Mathematics learning is memorizing rules, formulas, procedures, and facts; these rules, formulas, and facts are transmitted from the authority (i.e., a teacher) into the minds of students; teaching mathematics is preaching, and learning is assimilating what is preached. [Schwier & Misanchuk (1993), Dengate & Lerman (1995), Ernest (1995), Dunn (2002), Perkkilä (2003), Zakaria & Musiran (2010)]
Constructivist: Mathematics learning is a process of constructing meaning; mathematical concepts, procedures, and theories are constructed by students through individual and social processes; learning mathematics is either intuitive or mediated through interaction; students connect their prior experience to new learning of mathematics. [Steffe & Kieren (1994), Dengate & Lerman (1995), Ernest (1995), Furinghetti & Morselli (2009), Lo & Anderson (2010), Purnomo et al. (2016)]

Traditional beliefs about mathematics learning

Some mathematics teachers have traditional beliefs about learning mathematics. Their beliefs may support an exogenic philosophy and behaviorist theories of learning. Those who believe that knowledge should reflect external reality consider that learning is reflecting the real world, reproducing what has been learned from experience (the senses) (Hermans, 2002). According to this view, the teacher is the authority of mathematical knowledge who transmits facts, rules, and procedures into the minds of students (Dengate & Lerman, 1995). Those teachers consider learning to be memorizing facts, rules, and formulas (Ernest, 1991). Their metaphor of mind is a 'tabula rasa' (a blank slate), and the world is the absolute reality (Ernest, 1995).
However, the world may be the absolute Newtonian world (determinism), or it may be a social and cultural world (human agency). These mathematics teachers consider learning a passive reception of knowledge from external authorities (e.g., teachers), without students being sceptical about what they learn and how they learn it. Many mathematics teachers, still today, embrace this type of belief. According to Zakaria & Musiran (2010) and Perkkilä (2003), a majority of teachers (in their studies) believed learning of mathematics to be memorizing rules, procedures, and formulas. Such beliefs focus on mastering procedural skills (Ernest, 1989). Thus, even today, many mathematics teachers believe in these models of passive reception and submissive, compliant learning.

Constructivist beliefs about mathematics learning

Some mathematics teachers believe that learning is the construction of meaning by the learners. Their beliefs about mathematics learning align with an endogenic philosophy and constructivist theories of learning. Their viewpoints about learning mathematics are inclusive in the sense of adopting and adapting to cultural-historical activities in day-to-day life (Dengate & Lerman, 1995; Roth & Lee, 2007; Steffe & Kieren, 1994). They consider that the mind is an active site of constructing mathematical knowledge and that the world of knowledge represents the inner cognitive, intuitive, and experiential world (Ernest, 1995). Many mathematics teachers hold this belief about learning mathematics. Lo and Anderson (2010) stated that many preservice teachers believed in supporting the learning of mathematics by creating a challenging and supportive environment that builds upon students' experiences. Mathematics learning can be either an individual or a social process of conceiving concepts, meanings, and procedures. Students construct their meaning of mathematical knowledge through self-reflection and critical thinking. They may work collaboratively to teach and learn from each other (Brodie, 2010). According to this view, mathematics learning is an inductive process with cases, examples, and problems, shifting the goal to a broad-spectrum understanding of the phenomena. Ernest (1989) characterized constructivist learning as active construction of meaning, exploration of mathematical ideas, and learner autonomy. It is a process of transitioning from simple to complex constructions of mathematical concepts and ideas with meanings (Furinghetti & Morselli, 2009; Furinghetti & Morselli, 2011). While doing this, students work on their own problems, question themselves whether a result is right or wrong, and collaboratively check each other's work without going to the teacher (Boyd & Ash, 2018). Such teachers believe that students connect their prior experience to the new concepts of mathematics they learn in the classroom (Purnomo et al., 2016).

Integral beliefs about mathematics learning

Some mathematics teachers consider that teachers can help all students learn mathematics by creating a learning environment for everyone who wishes to learn (Leatham, 2002). The postmodern view of learning considers that learning is an active engagement in reflexive thinking, reasoning and problem-solving. Such a process encompasses retrospective, intuitive, prospective, and idiosyncratic thinking and reasoning about mathematical problems (Belbase, 2013a; Nagata, 2004).
Teachers' self-reflexive thinking and acting in the teaching process help students in problem-solving by integrating knowledge across disciplines or across content areas in the same discipline. Learning mathematics is an integral process of accommodating the variety of cognitive, affective, social, cultural, and historical resources available. Therefore, such teachers may encourage students to learn from intricacies and contexts by enhancing their potential and developing them as self-learners (Steffe & Gale, 1995), considering themselves as agents of social and cultural transformation with resilience (Taylor, Taylor, Karnovsky, & Taylor, 2017). While doing this, students participate in collaborative learning by embracing "struggle and mistakes" (Boyd & Ash, 2018). When students struggle through problems and resolve their mistakes, they not only try again but also apply different methods or procedures to solve the problem, either independently or in peer collaboration. Learning mathematics is related to the ability to recall, define, explain, compare, apply, comprehend, conjecture, refute, conceptualize, synthesize, and construct mathematical objects in a context. One could list many other action verbs related to the learning of mathematics. However, teacher beliefs about mathematics learning in terms of traditional, constructivist, and integral beliefs may not integrate all of these aspects. Most of the literature on teacher beliefs about learning mathematics has focused on conceptual and procedural aspects of problem-solving, with manipulation, representation, construction, justification, simplification, and extension of mathematical objects. Very few studies are concerned with teacher beliefs about learning mathematics in terms of the neurophysiological, psychological, philosophical, social, political, institutional, and individual factors that constitute the meaning of learning mathematics. A plethora of literature on teacher beliefs about learning mathematics deals with action-related belief constructs rather than the metacognitive and reflexive thinking of students, which has long-term impacts on their ability to develop their own mathematics and related concepts, models, and theories.

CONCLUSION

Mathematics teacher beliefs may have significant implications for the quality of teaching and learning in the classroom. Teacher beliefs are principal factors influencing instructional activities in the classroom and subsequent student learning (Skott, 2015). Many researchers agree that teacher beliefs may affect classroom practices, and hence developing positive beliefs is essential for changing teaching practice (Stipek, Givvin, Salmon, & MacGyvers, 2001). Likewise, other scholars and researchers (e.g., Fives & Buehl, 2012; Schoenfeld, 1992) emphasized teacher beliefs about the subject matter, the teaching and learning process, and students as significant determinants of the classroom process. Therefore, one of the goals of teacher development and education is associated with forming positive beliefs. This goal is possible to achieve with advanced mathematical content and pedagogical knowledge, including social, cognitive, and affective components (Schoenfeld, 2010). Therefore, mathematics teacher education should aim to form and change teacher beliefs for a change in practice (Richardson, 2003).
Mathematics education researchers (e.g., Peterson et al., 1989; Stipek et al., 2001; Thompson, 1992) emphasized forming and changing such beliefs for a change in the teaching and learning of mathematics. Understanding teachers' existing beliefs helps teacher educators plan and implement professional development activities that support reform-oriented teaching-learning practices, with the flexibility to adopt new knowledge, skills, and practices transforming their instructional beliefs (Spillane et al., 2017). In this context, mathematics teacher education can influence teachers' beliefs in a positive way for improved practice (Fenstermacher, 1979; Green, 1971). Hence, one of the goals of current mathematics teacher education is to transform beliefs about teaching and learning (Fenstermacher, 1979). This raises a question: How can teacher beliefs be changed? The question points to the methodological issues of how to form or change beliefs and the mechanisms for belief change. The process of forming positive beliefs about mathematics and the teaching-learning of mathematics is related to mechanisms for forming and changing those beliefs. These processes are linked with broader epistemic factors associated with teacher beliefs. The review of studies shows the possibility of different models for forming and changing teacher beliefs. These models can be helpful in the epistemic change in teacher education, leading to a focus on shaping constructivist and integral beliefs (Alexander & Sinatra, 2007; Sinatra, 2005). The process of forming or changing beliefs may affect one's epistemology and methodology as well (Chandler, Boyes, & Ball, 1990) by formulating and implementing new strategies (Schommer et al., 1992). The process of shaping beliefs for reform-oriented teaching and learning of mathematics is also related to conceptual change (Qian & Alvermann, 1995), one's cognitive ability (Kardash & Howell, 2000), moral reasoning (Bendixen, Schraw, & Dunkle, 1998), and overall academic performance (Cano & Cardelle-Elawar, 2008). Many teacher education programs focus on teacher beliefs as part of their interventions to impart positive beliefs for meaningful actions in the classroom (Part, 2009). The results of international studies (e.g., TIMSS and PISA) may provide motivation to develop or reform mathematics teacher education so as to change or shape teacher beliefs for more meaningful classroom practices (Part, 2009). However, the literature on teacher beliefs about mathematics, teaching mathematics, and learning mathematics has focused largely on the content and object of beliefs and less on the context, leaving space for further research and development on this complex issue.
Accumulation of elastic strain toward crustal fracture in magnetized neutron stars

This study investigates elastic deformation driven by the Hall drift in a magnetized neutron-star crust. Although the dynamic equilibrium initially holds without elastic displacement, the magnetic-field evolution changes the Lorentz force over a secular timescale, which inevitably causes the elastic deformation to settle into a new force balance. Accordingly, elastic energy is accumulated, and the crust is eventually fractured beyond a particular threshold. We assume that the magnetic field is axially symmetric, and we explicitly calculate the breakup time, maximum elastic energy stored in the crust, and spatial shear-stress distribution. For the barotropic equilibrium of a poloidal dipole field expelled from the interior core without a toroidal field, the breakup time corresponds to a few years for the magnetars with a magnetic field strength of $\sim 10^{15}$ G; however, it exceeds 1 Myr for normal radio pulsars. The elastic energy stored in the crust before the fracture ranges from $10^{41}$ to $10^{45}$ erg, depending on the spatial-energy distribution. Generally, a large amount of energy is deposited in a deep crust. The energy released at fracture is typically $\sim 10^{41}$ erg when the rearrangement of elastic displacements occurs only in the fragile shallow crust. The amount of energy is comparable to the outburst energy on the magnetars.

INTRODUCTION

The neutron-star crust is considered a key aspect for understanding several astrophysical phenomena. The solid layer near the stellar surface can support non-spherical deformations, called mountains, with a height of less than 1 cm. Such asymmetries on a spinning star cause the continuous emission of gravitational waves. Therefore, calculation of the maximum size of these mountains is key to the detection of gravitational waves and has been discussed in several theoretical studies (Ushomirsky et al. 2000; Payne & Melatos 2004; Haskell et al. 2006; Gittins et al. 2021). Gravitational-wave observation provides valuable information (Abbott et al. 2021a,b, 2022, for recent upper limits); thus, as the sensitivity of the LIGO-Virgo-KAGRA detectors continues to improve, the physics relevant to the phenomenon may be explored. Pulsar glitches are sudden spin-up events that are observed in radio pulsars (Espinoza et al. 2011; Basu et al. 2022, for glitch catalogues). Similar spin-up and peculiar spin-down events are observed in anomalous X-ray pulsars (Dib et al. 2008; Kaspi & Beloborodov 2017). A sudden spin-up in a radio pulsar is produced by the transfer of angular momentum from the superfluid components of the core to the normal crust (Anderson & Itoh 1975; Alpar et al. 1984). Crust quakes were also discussed in other models (Franco et al. 2000; Giliberti et al. 2020; Rencoret et al. 2021, for recent studies). In those models, the elastic deformation is caused by a decrease in centrifugal force, owing to a secular spin-down, and the crust eventually fractures when the strain exceeds a critical threshold. However, this simple model does not explain the observations; the loading of the solid crust between glitches is too insignificant to trigger a quake. Giant flares in magnetars are rare, albeit highly energetic. They typically release $\sim 10^{44}-10^{46}$ erg within a second (Turolla et al. 2015; Kaspi & Beloborodov 2017; Esposito et al. 2021, for a review).
Quasi-periodic oscillations (QPOs) with discrete frequencies in the range 20 Hz-2 kHz were observed in the tails of these flares. Per an order-of-magnitude estimate, these frequencies correspond to torsional shear or Alfvén modes with a magnetic field strength of $\sim 10^{15}$ G. Outbursts, which are less energetic, are also observed in magnetars. These activities are considered to be powered by internal strong magnetic fields of $\sim 10^{15}$ G. The crustal fracture of a magnetar is proposed as a model for fast radio bursts (FRBs) (Suvorov & Kokkotas 2019; Wadiasingh & Chirenti 2020), and it may be supported by QPOs (Li et al. 2022) in the radio burst from SGR J1935+2154 in the Galaxy (Mereghetti et al. 2020; CHIME/FRB Collaboration et al. 2020; Bochenek et al. 2020). Most FRBs are located at cosmological distances, and further observation will shed light on whether FRBs originate from magnetars or a subclass. A recent observation of the magnetar SGR 1830-0645 revealed pulse-peak migration during the first 37 days of outburst decay (Younes et al. 2022). This provides important information concerning the crust motion coupled with the exterior magnetosphere. Most theoretical studies have focused on the crustal-deformation limit. Elastic stresses gradually accumulate up to a particular threshold. Beyond this threshold, the elastic behavior of the lattice abruptly ceases, and the transition is exhibited as a star-quake or burst. An evolutionary calculation of the deformation is necessary to understand the related astrophysical phenomena. In this study, we consider the crust in a magnetized neutron star. The static magneto-elastic force balance was studied for various magnetic-field configurations (Kojima et al. 2021a, 2022). A variety of magneto-elastic equilibria was demonstrated, which are considerably different from the barotropic equilibrium without a solid crust. Herein, we explore the accumulation of shear stress induced by the Hall evolution, which is an important process in the strong field-strength regime. Suppose that the magneto-hydro-dynamical (MHD) equilibrium in the crust holds at a particular time without the elastic force. The equilibrium is not that for electrons (Gourgouliatos et al. 2013; Gourgouliatos & Cumming 2014); thus, the magnetic field tends toward the Hall equilibrium on a secular timescale. According to the magnetic-field evolution, the Lorentz force also changes. The deviation is assumed to be balanced with the elastic force. Thus, the shear stress in the crust gradually accumulates and reaches a critical limit. We cannot follow the post-failure evolution because some uncertainties are involved in the discontinuous transition. Therefore, our study provides the recurrence time and magnitude of the bursts. The models and equations used in the study are discussed in Section 2. For MHD equilibrium in a barotropic star, the evolution of the magnetic field is driven by a spatial gradient of electron density. In Section 3, the critical configuration at the elastic limit is evaluated, and the accumulating elastic energy is calculated. In Section 4, we also consider non-barotropic effects using simple models. The non-barotropicity results in another driving process of the magnetic-field evolution, and consequently elastic deformation. The numerical results of these models are given. Finally, our conclusions are presented in Section 5.

Magnetic Equilibrium

We consider the dynamical force balance between pressure, gravity, and the Lorentz force.
The MHD equilibrium is described as follows:

$$\frac{1}{\rho}\nabla P + \nabla\Phi_g - \frac{1}{c\rho}\,\boldsymbol{j}\times\boldsymbol{B} = 0, \qquad (1)$$

where Φ_g is the gravitational potential including the centrifugal terms. The third term has a magnitude ∼10^−7 (B/10^14 G)^2 times smaller than those of the first and second terms. The deviation owing to the Lorentz force is small enough to be treated as a perturbation on a background equilibrium. We limit our consideration to an axially symmetric magnetic-field configuration. The poloidal and toroidal components of the magnetic field are expressed by two functions Ψ and S, respectively, as follows:

$$\boldsymbol{B} = \nabla\Psi\times\frac{\boldsymbol{e}_\phi}{\varpi} + \frac{S}{\varpi}\,\boldsymbol{e}_\phi, \qquad (2)$$

where ϖ = r sinθ is the cylindrical radius, and e_φ is the azimuthal unit vector in (r, θ, φ) coordinates. When the equilibrium is barotropic, i.e., the constant surfaces of ρ and P are parallel, the azimuthal current j_φ is described in the form

$$j_\phi = c\rho\varpi K(\Psi) + \frac{c}{4\pi\varpi}\,S\frac{dS}{d\Psi}, \qquad (3)$$

where the current function S should be a function of Ψ, and K is another function of Ψ. For the axially symmetric barotropic equilibrium, the acceleration due to the Lorentz force, abbreviated as a ≡ (cρ)^−1 j × B, is given by

$$\boldsymbol{a} = K(\Psi)\nabla\Psi = \nabla\!\left(\int K\,d\Psi\right). \qquad (4)$$

The force balance (1) is thus described by gradient terms of scalar functions. The magnetic function Ψ is obtained by solving the Ampère-Biot-Savart law with the source term (Equation (3)) after the functional forms S(Ψ) and K(Ψ) are specified. For simplicity, we assume that K is a linear function of Ψ, K = K_0Ψ, and S = 0 in Equation (3). The poloidal magnetic field is then purely dipolar and is given by Ψ = g(r) sin²θ. The radial function g is solved with appropriate boundary conditions: a vacuum at the surface R, and g = 0 at the core-crust interface r_c. The latter refers to the magnetic field expelled from the core. For the case where the field penetrates into the core, g smoothly connects to the interior solution at r_c. We normalize the radial function by the dipole field strength B_0 at the surface, g(R) = B_0R²/2. The magnetic geometry discussed above is a simple initial model with which to examine the magnetic-field evolution. However, purely poloidal magnetic field configurations are unstable according to an energy principle (Tayler 1973; Markey & Tayler 1973; Wright 1973) and numerical MHD simulations (Braithwaite & Nordlund 2006; Braithwaite 2009; Lander & Jones 2011a,b; Mitchell et al. 2015). Dynamical simulations revealed that the final state after a few Alfvén-wave crossing times is a twisted-torus configuration, in which poloidal and toroidal components of comparable field strengths are tangled. Moreover, a recent three-dimensional simulation shows a very asymmetric equilibrium (Becerra et al. 2022). These studies are concerned with the configuration in an entire star. The information relevant to our study corresponds to the magnetic field in a thin outer layer; therefore, the present understanding is quite incomplete. For example, the ratio of toroidal to poloidal components decreases near the surface, because the toroidal field should vanish in the exterior. However, the ratio in the crust located near the surface is uncertain, almost zero or of the order of unity, although both components are comparable in magnitude inside the star. A simple magnetic-field configuration is used initially in this paper, but it will be necessary to improve the configuration. We now discuss the non-barotropic equilibrium of the magnetic field.
From Equation (1), the acceleration owing to the Lorentz force satisfies

$$\nabla\times\boldsymbol{a} = \nabla\times\left(\frac{1}{\rho}\nabla P\right). \qquad (5)$$

The acceleration a is no longer described by the gradient of a scalar, but may be generalized as the sum of a gradient term and a term α∇β (Equation (6)), where α and β are functions of r and θ, and ∇×a = ∇α×∇β ≠ 0 is assumed. Owing to the almost arbitrary functions α and β, the constraint on the electric current, and hence on the magnetic-field configuration, is relaxed in the non-barotropic case. Non-barotropicity has been studied in the context of magnetic deformation (Mastrano et al. 2011, 2013, 2015). Barotropic (Haskell et al. 2008; Kojima et al. 2021b) and non-barotropic (Mastrano et al. 2011, 2013, 2015) models differ significantly, by up to approximately one order of magnitude in the resulting ellipticity. The effect is important; however, its treatment remains unclear. Therefore, we introduce models of ∇×a to study non-barotropicity in Section 4.

Magnetic-field Evolution

Our consideration is limited to the inner crust of a neutron star, where the mass density ranges from ρ_c = 1.4 × 10^14 g cm^−3 at the core-crust boundary r_c to the neutron-drip density ρ_1 = 4 × 10^11 g cm^−3 at R = 12 km. We ignore the outer crust and the "ocean," and treat the exterior region r > R as a vacuum. The crust thickness Δr ≡ R − r_c is assumed to be Δr/R = 0.05, i.e., Δr = 0.6 km. The Lorentz force j × B is not fixed over a secular timescale, owing to the magnetic-field evolution. The evolution in the crust is governed by the induction equation

$$\frac{\partial\boldsymbol{B}}{\partial t} = -\nabla\times\left[\frac{c}{4\pi e n_e}\left(\nabla\times\boldsymbol{B}\right)\times\boldsymbol{B} + \frac{c^2}{4\pi\sigma}\nabla\times\boldsymbol{B}\right], \qquad (7)$$

where n_e is the electron-number density, and σ is the electric conductivity. The first term in Equation (7) represents the Hall drift, and the Hall timescale τ_H is estimated as

$$\tau_H = \frac{4\pi e\,n_{ec}\,(\Delta r)^2}{c\,B_0}, \qquad (8)$$

where the electron-number density n_ec at the core-crust boundary, the crust thickness Δr, and the dipole field strength B_0 at the surface are used. This timescale is shorter than that of the Ohmic decay, the second term in Equation (7), in the strong-magnetic-field regime. The Hall-Ohmic evolution was numerically simulated on the Hall timescale (Pons & Geppert 2007; Kojima & Kisaka 2012; Viganò et al. 2013; Gourgouliatos et al. 2013; Gourgouliatos & Cumming 2014; Viganò et al. 2021) for axially symmetric models. Recently, the calculation has been extended to 3D models (Wood & Hollerbach 2015; Viganò et al. 2019; De Grandis et al. 2020; Gourgouliatos et al. 2020; Igoshev et al. 2021), revealing some of the effects ignored in the 2D models. Here, our calculation is limited to the early phase of the evolution in a simpler axially symmetric model. We consider only the Hall drift term in Equation (7) and rewrite it in a normalized form (Equations (9) and (10)), in which a dimensionless function χ̃ appears; χ̃ represents an inverse of the electron fraction. The electron-number density is obtained from the proton fraction of the equilibrium nucleus in "cold catalyzed matter," i.e., it is determined in the ground state at T = 0 K. The data given by Douchin & Haensel (2001) are approximately fitted by a smooth function of ρ̃ ≡ ρ/ρ_c (Equation (11)). The spatial-density profile of a neutron star in r_c ≤ r ≤ R is approximated following Lander & Gourgouliatos (2019) (Equation (12)). The radial derivative dχ̃/dr = (dχ̃/dρ)(dρ/dr) changes sharply, owing to an abrupt decrease in the density near the surface; however, χ̃ is a smoothly varying function of O(1). This functional behavior, which originates from the stellar-density profile inherent in neutron stars, is crucial in our numerical calculation. A rough numerical evaluation of the Hall timescale in Equation (8) is sketched below.
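The following minimal sketch evaluates Equation (8) in CGS units. The crust thickness (0.6 km) comes from the text; the fiducial electron-number density n_ec at the core-crust boundary is an assumed value, consistent with ρ_c = 1.4 × 10^14 g cm^−3 and an electron fraction of a few percent, not a number quoted in the text.

```python
import math

# Hall timescale from Equation (8): tau_H = 4*pi*e*n_ec*(dr)^2 / (c*B0), CGS.
E_CHARGE = 4.803e-10   # electron charge [esu]
C_LIGHT = 2.998e10     # speed of light [cm/s]
YEAR = 3.156e7         # seconds per year

def hall_timescale_yr(B0, n_ec=2.5e36, dr=6.0e4):
    """Hall timescale [yr] for surface dipole B0 [G], electron number
    density n_ec [cm^-3] (assumed fiducial value), and crust thickness
    dr [cm] (0.6 km in the text)."""
    tau_sec = 4.0 * math.pi * E_CHARGE * n_ec * dr**2 / (C_LIGHT * B0)
    return tau_sec / YEAR

for B0 in (1e13, 1e14, 1e15):
    print(f"B0 = {B0:.0e} G  ->  tau_H ~ {hall_timescale_yr(B0):.1e} yr")
```

With these assumed inputs, τ_H drops from ∼10^6 yr at 10^13 G to ∼10^4-10^5 yr at magnetar field strengths, illustrating the inverse scaling with B_0.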
Different fitting formulae are discussed for different equations of state in Pearson et al. (2018); however, the difference in χ̃ is not significant in our analysis. We consider the early phase of the evolution in the axially symmetric equilibrium model, in which a_φ = 0. From Equation (9) at t = 0, we obtain ∂B_φ/∂t ≠ 0, but ∂B_r/∂t = ∂B_θ/∂t = 0. The azimuthal component B_φ changes linearly with time t, whereas the poloidal components change as t². We limit our consideration to the lowest order in t only and ignore the change in the poloidal magnetic field. The early phase of the toroidal magnetic field may therefore be approximated by a linear-in-time growth characterized by δS (Equation (13)), where δS is a function of r and θ. Because a poloidal current is associated with δB_φ, the Lorentz force δf = c^−1(δj × B + j × δB) also changes. We observe that the non-zero component is δf_φ because j_p = B_φ = 0 at t = 0, and we write it explicitly (Equation (14)).

Quasi-stationary Elastic Response

We assume that the solid crust acts elastically against the force δf_φ. The change is so slow that the elastic evolution is quasi-stationary. The acceleration ∂²ξ_i/∂t² of the elastic displacement vector ξ_i is dismissed; thus, the elastic force is balanced with the change in the Lorentz force (Equation (15)), when the gravity and pressure in Equation (1) are assumed to be fixed. The elastic force is expressed by the trace-free strain tensor σ_ij and a shear modulus μ; the force balance then reads ∇_j(2μσ^j_i) + δf_i = 0 (Equation (16)), where incompressible motion ∇_kξ^k = 0 is assumed. Alternatively, the equivalent form is ∇_j(2μσ^j_i + M^j_i) = 0 (Equation (17)), where M^j_i is the magnetic stress tensor. The relevant component induced by δf_φ is the azimuthal displacement ξ_φ only, and the shear tensors determined by it are

$$\sigma_{r\phi} = \frac{r}{2}\,\partial_r\!\left(\frac{\xi_\phi}{r}\right), \qquad \sigma_{\theta\phi} = \frac{\sin\theta}{2r}\,\partial_\theta\!\left(\frac{\xi_\phi}{\sin\theta}\right). \qquad (18)$$

The shear modulus μ increases with density, and it may be approximated as a linear function of ρ, μ = μ_c(ρ/ρ_c) (Equation (19)), fitted overall to the results of a detailed calculation reported in a previous study (see Figure 43 in Chamel & Haensel 2008), where μ_c = 10^30 erg cm^−3 at the core-crust interface. The shear speed v_s associated with Equation (19) is then constant through the crust: v_s = (μ/ρ)^1/2 = (μ_c/ρ_c)^1/2 ≈ 10^8 cm s^−1 (Equation (20)). To solve Equation (15), we use an expansion method with the Legendre polynomials P_l(cosθ) and radial functions k_l(r) and a_l(r) (Equations (21) and (22)). The displacement ξ_φ decouples with respect to the index l, owing to the spherical symmetry of μ(r). Equation (15) is reduced to a set of ordinary differential equations for the k_l (Equation (23); Kojima et al. 2022), where a prime ′ denotes a derivative with respect to r. The boundary conditions for the radial functions k_l are given by the force balance across the surfaces at r_c and R. That is, the shear-stress tensor σ_rφ vanishes because the other stresses for the fluid and magnetic field are assumed to be continuous. Explicitly, we have k′_l = 0 both at r_c and R. Note that a mode with l = 1 is special in Equation (23), and k_1 is simply obtained by integrating a_1 with respect to r.

ELASTIC DEFORMATION IN BAROTROPIC MODEL

The magnetic-field evolution in Equation (9) is driven by two terms, which are examined separately. We first consider the barotropic case, in which ∇×a = 0. The evolution is driven by the first term in Equation (9), i.e., the distribution of the electron fraction. The linear growth term δS in Equation (13) is obtained using Equation (4) (Equation (24)). In the case where the poloidal magnetic field (Ψ = g sin²θ) is confined in the crust, the constant K_0 is numerically obtained as K_0 = 6.0 × B_0/(ρ_c(Δr)²).
Alternatively, we express K_0 = 8.6 × 10^1 v_a²/(B_0R²), where v_a is the Alfvén speed in terms of B_0 and ρ_1: v_a = B_0/(4πρ_1)^1/2 (Equation (25)). The Lorentz force δf_φ in Equation (14) is calculated using Equation (24). For later convenience, we consider the general form δS = y_l(r) sinθ P_l,θ. By using an identity for the Legendre polynomials, we reduce Equation (14) to a form (Equation (26)) in which the radial functions a_{l−1} and a_{l+1} in Equation (22) are induced by y_l. By numerically solving Equation (23) for the k_l, we obtain ξ_φ in Equation (21) and the shear-stress tensors σ_rφ and σ_θφ in Equation (18). For Equation (24), the results are expressed using a combination of k_1 and k_3.

Results

The shear stress increases homologously with time, i.e., the spatial profile of the shear force is unchanged, but the magnitude increases with time. The numerical calculation provides the maximum shear stress σ_max with respect to (r, θ) in the crust (Equation (27)); the maximum is determined by the ratio of the shear speed v_s in Equation (20) to the Alfvén speed v_a in Equation (25). Elastic equilibrium is possible only when the shear strain satisfies a particular criterion. We adopt the von Mises criterion, σ ≡ (σ_ijσ^ij/2)^1/2 ≥ σ_c, to determine the elastic limit (Equation (28)), where σ_c is a number σ_c ≈ 10^−2-10^−1 (Horowitz & Kadau 2009; Caplan et al. 2018; Baiko & Chugunov 2018). Thus, the period of the elastic response is limited by the constraint σ_c (Equation (29)). The breakup time t* becomes short, i.e., a few years for magnetars with B_0 = 10^15 G. This agrees with the recurrence time of the activity in magnetars. However, the timescale exceeds 1 Myr for most neutron stars with B < 10^13 G; moreover, other evolutionary effects then become important, and the present results are not applicable.

Energy

The stored elastic energy is obtained by numerically integrating the elastic energy density ∼μσ_ijσ^ij over the entire crust (Equation (30)), where we have used R³ instead of R²Δr to normalize the crustal volume because Δr/R = 0.05 is fixed. The elastic energy ΔE_ela increases up to ≈10^41 erg at the breakup time t* (Equation (31)). The toroidal magnetic energy ΔE_mag is also obtained (Equation (32)). Here, the shear speed appears in ΔE_mag because the breakup time t* in Equation (29) is used instead of τ_H. The magnetic energy ΔE_mag at t* is ΔE_mag ≈ 2 × 10^43 (B_0/10^14 G)^−2 erg. However, it is considerably smaller than the poloidal magnetic energy E_mag,p, which is numerically calculated as E_mag,p = 3.8 B_0²R³ ≈ 4 × 10^46 (B_0/10^14 G)² erg. Note that the total magnetic energy is conserved by the Hall evolution. Therefore, the same amount of poloidal magnetic energy decreases. However, we ignored the change in the poloidal component and its energy, which are proportional to t². The ratio of Equations (31) and (32) (Equation (33)), in which μ_c and B_0² are eliminated using v_s and v_a, is proportional to B_0²; thus, ΔE_mag decreases in more strongly magnetized neutron stars. From the viewpoint of the energy flow from the poloidal component, the breakup energy ΔE_ela ≈ 10^41 erg at the terminal is fixed, but the ΔE_mag stored in the middle depends on the Hall drift speed. The elastic energy is accumulated more efficiently through the toroidal magnetic energy with an increase in B_0; ΔE_ela > ΔE_mag for B_0 > 6.7 × 10^14 G. These scalings are illustrated in the sketch below.

Figure 1. Energy distribution in the crust as a function of radius, (r − r_c)/Δr. The normalized energy density ε(r) is plotted for the magnetic energy (dotted curve) and the elastic energy (solid curve).
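The sketch below collects the numerical scalings quoted above. The prefactor of 3 yr at 10^15 G is an assumed round number consistent with the statement "a few years"; the B_0^−3 scaling of t* is stated in the Summary, and the two energy scalings are taken directly from the text.

```python
# Order-of-magnitude scalings for the barotropic model (field expelled
# from the core). Prefactors are rough values read off the text.

def breakup_time_yr(B0):
    """Breakup time t* [yr]; assumed normalization ~3 yr at B0 = 1e15 G,
    with the t* ~ B0^-3 scaling stated in the Summary."""
    return 3.0 * (B0 / 1e15) ** -3

def delta_E_mag_erg(B0):
    """Toroidal magnetic energy at t* [erg]: ~2e43 (B0/1e14 G)^-2."""
    return 2e43 * (B0 / 1e14) ** -2

def E_mag_poloidal_erg(B0):
    """Poloidal magnetic energy [erg]: ~4e46 (B0/1e14 G)^2."""
    return 4e46 * (B0 / 1e14) ** 2

# Elastic energy at breakup is ~1e41 erg, roughly independent of B0.
for B0 in (1e13, 1e14, 1e15):
    print(f"B0 = {B0:.0e} G: t* ~ {breakup_time_yr(B0):.1e} yr, "
          f"dE_mag(t*) ~ {delta_E_mag_erg(B0):.1e} erg, "
          f"E_mag,p ~ {E_mag_poloidal_erg(B0):.1e} erg")
```

Running it reproduces the qualitative picture in the text: a few years at magnetar field strengths versus more than 1 Myr at 10^13 G, with ΔE_mag shrinking as B_0 grows.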
Figure 1 shows the spatial-energy densities ε_ela(r) and ε_mag(r), corresponding to ΔE_ela and ΔE_mag in the crust, respectively. They are normalized as ∫ε_ela dr = ∫ε_mag dr = 1. Evidently, both energies are highly concentrated near the surface r ≈ R. This property comes from the radial derivative of χ̃ in Equation (10): dχ̃/dr = (dχ̃/dρ)(dρ/dr) is steep there even though χ̃ is O(1). The large value comes from |dρ/dr|, i.e., a sharp decrease in density near the stellar surface, and it results in an evolution timescale much smaller than τ_H in Equation (29).

Spatial Distribution

The shear stresses σ_θφ and σ_rφ are induced by the axial displacement ξ_φ. A numerical calculation shows that the component σ_rφ is considerably larger than σ_θφ: (σ_rφ)_max ∼ 200(σ_θφ)_max. Figure 2 shows their spatial distribution using a contour map of 2μσ_θφ and 2μσ_rφ in the r-θ plane. The angular dependence of σ_θφ is σ_θφ ∝ sin²θ cosθ, and it is antisymmetric with respect to the equator (θ = π/2). Moreover, σ_rφ is the sum of P_1,θ and P_3,θ terms, and it is symmetric with respect to θ = π/2. The magnitude σ = (σ_ijσ^ij/2)^1/2 is also shown in the right panel; σ is sharply peaked near the surface, as expected from the sharp energy-density distribution in Figure 1. We now discuss the modification of the poloidal magnetic field at the core-crust boundary. Thus far, the magnetic field has been expelled there. When the field penetrates into the core, the function g near the boundary and the constant K_0 in Equation (24) are changed. The former is unimportant because the function χ̃′ is sharp near the surface, and this fact determines the result, as shown in Figures 1 and 2. The constant K_0 for the penetrated field is 4.1 × 10^−2 times smaller than that for the expelled one. Consequently, the profile is almost unchanged, but the breakup time t* increases by a factor of 24 for the same dipole field strength.

ELASTIC DEFORMATION IN NON-BAROTROPIC MODEL

We consider the evolution driven by the second term, ∇×a ≠ 0, in Equation (9), which originates from the non-barotropic material distribution. However, ∇×a and the corresponding magnetic field cannot be easily estimated unless the non-barotropic property is specified. A large freedom of choice hinders our analysis. Therefore, we simply model the term ∇×a to understand the non-barotropic effect in its magnitude and property. For this purpose, we assume a_φ = 0 and model ∇×a with an overall normalization N and a radial profile F_n (Equation (34)), where N is a constant that characterizes the non-barotropic strength and has the dimension of velocity, and F_n is a non-dimensional radial function. We consider a small deviation from the barotropic case, for which the second term in Equation (6) is smaller than the first term. Therefore, the magnetic field is approximated using the barotropic case, i.e., the poloidal magnetic function Ψ and S = 0. This treatment constrains the magnitude of the normalization N in Equation (34). By a dimensional argument, we have |N| ≪ R/t_ff ∼ 10^9 cm s^−1, where t_ff is a free-fall timescale. Moreover, |N| < v_a and |N| < v_s ∼ 10^8 cm s^−1 are also likely because the crust is in magneto-elastic equilibrium. The angular dependence of Equation (34) is chosen such that δS is the same as in Equation (24). The radial function F_n has a maximum normalized to unity, and it vanishes at the inner and outer boundaries (Equation (35)), where n = 1 or 3; one possible functional form is sketched below.
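The explicit form of Equation (35) is not reproduced here, so the polynomial family below is offered purely as a hypothetical stand-in: it satisfies every property stated in the text (it vanishes at r_c and R, is normalized to a unit maximum, and peaks at r_c + Δr/4 for n = 1 and at R − Δr/4 for n = 3), but it is not taken from the paper.

```python
def F_n(r, n, r_c=11.4, R=12.0):
    """Assumed radial profile F_n(r) ~ x^n (1-x)^(4-n) with
    x = (r - r_c)/(R - r_c); its maximum sits at x = n/4, i.e. at
    r_c + dr/4 for n = 1 ("in") and R - dr/4 for n = 3 ("out").
    Radii in km; r_c = 11.4 km and R = 12 km follow the text."""
    x = (r - r_c) / (R - r_c)
    raw = x**n * (1.0 - x) ** (4 - n)
    peak = (n / 4.0) ** n * (1.0 - n / 4.0) ** (4 - n)  # value at x = n/4
    return raw / peak

# Quick check of the peak positions for the "in" and "out" models.
rs = [11.4 + 0.6 * i / 200 for i in range(201)]
for n in (1, 3):
    r_peak = max(rs, key=lambda r: F_n(r, n))
    print(f"n = {n}: peak at r = {r_peak:.3f} km "
          f"(expected {11.4 + 0.6 * n / 4:.3f} km)")
```

Any smooth profile with the same boundary and peak properties would serve equally well for the qualitative discussion that follows.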
The model with n = 1 is referred to as the "in" model because the maximum is located at r = r_c + Δr/4, whereas that with n = 3 is referred to as the "out" model because the maximum is located at r = R − Δr/4.

Results of a Simple Model

We neglect the first term in Equation (9) and consider the magnetic-field evolution driven by the term ∇×a (Equation (34)) only. Similar to the calculations in Section 3, a linearly growing shear stress is obtained, owing to ξ_φ. The period of the elastic response is limited by a breakup time t* (Equation (36)), where n_1 is a number of the order of 10^−2, depending on the model, as listed in Table 1. Owing to our simple modeling, the comparison between the barotropic and non-barotropic models is uncomplicated: the Alfvén speed v_a in Equation (29) is formally substituted by N in Equation (36).

Table 1. Numerical coefficients in Equations (36)-(38) for each model.

Model    n_1            n_2             n_3
in       8.0 × 10^−3    2.1 × 10^−3     7.2 × 10^−7
out      1.2 × 10^−2    2.4 × 10^−4     1.0 × 10^−6
ave      3.6 × 10^−3    2.9 × 10^−4     2.9 × 10^−7
min      1.9 × 10^−3    0.49 × 10^−4    0.73 × 10^−7
max      5.5 × 10^−3    7.0 × 10^−4     6.4 × 10^−7

The elastic energy ΔE_ela and toroidal magnetic energy ΔE_mag stored inside the crust are also summarized (Equations (37) and (38)), where n_2 ≈ 6 × 10^−4 and n_3 ≈ 10^−6, as listed in Table 1. The elastic energy ΔE_ela does not depend on N; however, the timescale (36) does. The amount of elastic energy is unrelated to the detailed process, which affects only the accumulation speed in the crust. At the breakup time, the elastic energy is ΔE_ela = 2 × 10^44-2 × 10^45 erg. This energy is more than three orders of magnitude larger than the energy (Equation (31)) considered in the previous section. The difference becomes clear when considering the energy-density distribution. Figure 3 shows the energy-density distribution in the crust. The difference in the toroidal magnetic energy clearly originates in the model choice; the energy density spreads more towards the interior for the "in" model, whereas it spreads more towards the exterior for the "out" model. Most of the elastic energy is localized near the inner core-crust boundary; however, the distribution in the "out" model is shifted outwardly, with a second peak (∼r_c + 0.8Δr) produced by the input model. The integrated elastic energy in the "out" model is one order of magnitude smaller than that in the "in" model. The amount of elastic energy at the breakup clearly depends on the spatial distribution of the energy density because the shear modulus μ is a strongly decreasing function toward the surface. The elastic limit of the entire crust is typically determined by a condition on the shear σ_ij near the surface. The total elastic energy ∼∫μσ_ijσ^ij d³x thereby decreases as σ_ijσ^ij is localized towards the exterior. The breakup elastic energy ΔE_ela ∼ 10^41 erg at t* in the previous section is an extreme case because the energy density is concentrated near the surface. Figure 4 shows the magnitude of the shear stress σ inside the crust. The contours of σ in the two models are different. We identified that the dominant component in the "in" model (left panel) is σ_θφ, which has an angular dependence described by σ_θφ ∝ sin²θ cosθ. The maximum of σ is attained along the lines cosθ = ±1/√3 (θ ≈ 55°, 125°). The component σ_rφ is dominant in the "out" model (right panel). Sharp peaks are localized near the surface, similar to the right panel in Figure 2; however, the localization is not as pronounced as in Figure 2. The magnitude σ, which is important for determining the critical limit, is large near the surface.
Figure 4. Crust contour map of the magnitude of the stress tensor, normalized by its maximum. The left panel, for the "in" model, shows σ ≈ σ_θφ, whereas the right panel, for the "out" model, shows σ ≈ σ_rφ.

Results of a Model Including Higher Multipoles

In a more realistic situation, the solenoidal acceleration may fluctuate spatially. We consider a sum of multipole components P_l,θ (Equation (39)), where λ_lF_n is a radial function that is randomly selected from ±F_1 or ±F_3 depending on l. As discussed for Equation (26), the radial function k_l in the azimuthal displacement ξ_φ is solved for the source term that originates from λ_{l−1}F_n and λ_{l+1}F_n; thus, the amplitude |k_l| fluctuates according to the randomness. We fix the overall constant N. Equation (39) reduces to Equation (34) when l_max = 2. Moreover, higher l-modes, up to l_max = 30 with a power-law weight, are included. The power-law index is considerably steep; therefore, the dominant component is still described by l = 2. We calculated 20 models by randomly mixing λ_lF_n. The numerical results are summarized in the same forms as Equations (36)-(38), and the numerical values n_i (i = 1, 2, 3) in the breakup time and energies are listed in Table 1 according to the average, minimum, and maximum over the 20 models. These numerical values are of the same order as those for a single mode with l = 2 because we include the higher l-modes as a correction. Interestingly, the breakup time t* generally becomes shorter than that for a single mode with l = 2 because the higher modes, l ≥ 3, are cooperative. Figure 5 demonstrates the spatial distribution of the shear-stress tensor. Two models are shown using contours of the magnitude of σ. In the left panel, the sub-critical regions lie along a constant-θ line with a sharp peak near the surface. In the other model (right panel), a peak is observed at θ = 0 near the surface. The angular position of the peak differs between the two models. As shown in Figure 4, the spatial pattern along a constant θ comes from the component σ_θφ, whereas the sharp peak near the surface is due to σ_rφ. The mixing of the two types of radial functions, F_1 and F_3, and the angular functions P_l with random phases and weights only complicates the spatial distribution of σ. A sharp peak is likely to be located near the surface. The outer part of the crust is always fragile; thus, the breakup time becomes shorter.

SUMMARY AND DISCUSSION

We have considered the evolution of elastic deformation over a secular timescale (> 1 yr), starting from zero displacement. The initial state is related to the dynamic force balance that is established within a second. When a neutron star cools below the melting temperature T ∼ 10^9 K, its crust crystallizes. Meanwhile, the pressure, gravity, and Lorentz force are balanced without the elastic force. In another situation, the elastic energy settles to the ground state, and zero displacement occurs after the energy is completely released at a crustal fracture. Therefore, the initial condition is simple and natural. When the MHD equilibrium is axisymmetric, the azimuthal component of the magnetic field increases linearly under the Hall evolution. Consequently, elastic deformation in the azimuthal direction is induced to cancel the change in the Lorentz force, and the shear strain gradually increases. We estimate the range of the elastic response. Beyond the critical limit, the crust responds plastically or fractures. Our calculations provide the breakup time and shear distribution at the threshold.
For the barotropic case, the breakup time until fracture is proportional to the cube of the magnetic-field strength. The time becomes a few years for a magnetar with a surface dipole of B_0 ∼ 10^15 G, when the field is expelled from the core. However, it exceeds 1 Myr for most radio pulsars (B_0 < 10^13 G), and the process is irrelevant to them. In addition to the field-strength dependence, the timescale is typically shortened by a factor of ∼10^−3 relative to the Hall timescale because the elastic displacement is highly concentrated near the surface. The driving mechanism is related to the instability associated with electron-density gradients (Wood et al. 2014). The distribution in any realistic model of neutron-star crusts is considerably sharp; therefore, the evolution is generic. Another type of Hall-drift instability occurs in the presence of a non-uniform magnetic field (Rheinhardt & Geppert 2002), which is not considered here; its energy would be smaller, owing to the size of the irregularity. In our calculation, we do not follow the instability; instead, we estimate the energy transferred to the elastic deformation. The elastic energy at the critical limit in the model driven by the electron-number-density gradient is ∼10^41 erg. This amount of energy is of the same order as that of short bursts in magnetars. The breakup time of ∼10 years is also consistent with the observed recurrence time of the bursts. However, the energy ∼10^41 erg is smaller than that of giant flares, ∼10^44-10^46 erg (Turolla et al. 2015; Kaspi & Beloborodov 2017; Esposito et al. 2021, for a review). The total elastic energy derived in Section 3 is based on the electron-number density in cold catalyzed matter, i.e., the ground state at T = 0 K. If this assumption were relaxed, non-barotropic effects might increase the total elastic energy, as considered in Section 4. When the pressure distribution is no longer expressed solely by the density ρ, the magnetic evolution is affected by the solenoidal acceleration, ∇×a = ∇×(ρ^−1∇P) ≠ 0. We have also considered this effect by modeling it in terms of a spatial function and an overall strength parameter, which are assumed to be constant in time in our non-barotropic model. Using the simplified model, we calculated the breakup time of the crustal failure and the energies stored in the crust. The results were comparable to those for the barotropic case. The strength parameter significantly affects the breakup time; the larger the magnitude, the shorter the breakup time. However, the amount of elastic energy at the breakup does not depend on the strength parameter, but only on the spatial function. The maximum elastic energy considerably increases, up to ∼10^45 erg. However, the model is still primitive, and thermal evolution should also be incorporated to investigate a more realistic situation. The maximum energy has been explored thus far; however, a natural question arises: what fraction of the energy is released at the crustal fracture when the strain exceeds the threshold? This question is important but at present unresolved, owing to our lack of understanding of the fracture dynamics. Therefore, we present the following discussion. As depicted in Figure 5, in the realistic mixture model, a peak of the shear strain σ is probably located near the surface, where the crust is fragile. Therefore, the fracture should not involve the whole crust, but only the shallow crust.
For this case, the released energy is not the whole elastic energy, ∼10^45 erg, but only the energy stored in the restricted region, i.e., a small fraction of the total, probably ∼10^41 erg. The elastic deformation driven by the Hall evolution is simulated here for the first time. The critical structure at the breakup time is crucial for the subsequent evolution, irrespective of whether it is plastic evolution or fracturing. The transition may appear as a burst on a magnetar. The magnetic-field rearrangement due to a mimicked burst was incorporated in the Hall evolution (Pons & Perna 2011; Viganò et al. 2013; Dehman et al. 2020) without solving the elastic deformation. These studies estimated the critical state based on the magnetic stress M_ij. In the numerical simulations, M_ij changed, and the critical state was assumed to be reached when a condition among the M_ij attained a threshold value. Similar approximations for the elastic limit, derived solely from M_ij, were used in previous studies (Lander et al. 2015; Lander & Gourgouliatos 2019; Suvorov & Kokkotas 2019). Mathematically, the shear stress σ_ij cannot be derived from M_ij without solving the appropriate differential equation ∇_j(2μσ^j_i + M^j_i) = 0 (see Equation (17)). Therefore, previous results based on a criterion for M_ij alone are questionable. Our calculation shows that the period of elastic evolution is typically 10^−3 times the Hall timescale; however, this value depends on the strength and geometry of the magnetic field. The timescale is shorter than the Ohmic timescale for B ≥ 10^13 G. The magnetic-field evolution beyond this period may be described by including a viscous bulk flow when the crust responds plastically. The effect of plastic flow on the Hall-Ohmic evolution was considered by assuming a plastic flow everywhere in the crust (Kojima & Suzuki 2020) or by using an approximated criterion (Lander & Gourgouliatos 2019; Gourgouliatos & Lander 2021; Gourgouliatos 2022). The effect may be regarded as additional energy lost to the Ohmic decay. However, the post-failure evolution significantly depends on the modeling in the numerical simulation (Gourgouliatos & Lander 2021; Gourgouliatos 2022); that is, the region of plastic flow is either local or global when the failure criterion is satisfied. Therefore, the manner of incorporating crust failure in numerical simulations must be explored. Finally, further investigation is required before the elastic deformation toward crust failure can be considered a viable model. The effect of the magnetic-field configuration should be considered because there are many degrees of freedom concerning it. Moreover, the outer boundary, i.e., the inner-outer crust boundary or the exterior magnetosphere, is crucial, as the crust becomes more fragile with increasing radius. Meanwhile, the electric conductivity decreases, and the Ohmic loss becomes more important. By considering the coupling to an exterior magnetosphere, the twisting of the magnetosphere as well as the crustal motion can be calculated on a secular timescale to match astrophysical observations, e.g., to describe the pre-stage of outbursts, as in SGR 1830-0645 (Younes et al. 2022).
Persistence of Wild-Type Japanese Encephalitis Virus Strains Cross-Neutralization 5 Years After JE-CV Immunization

Background. The live-attenuated Japanese encephalitis (JE) vaccine (JE-CV; IMOJEV) induces a protective response in children. A shift in circulating JE virus strains suggests that a genotype shift phenomenon may occur throughout Southeast Asia. We assessed the neutralization of wild-type (WT) JE virus isolates at distal time points after vaccination.

Methods. We analyzed serum samples from a subset of 47 children who had received a JE-CV booster after an inactivated JE vaccine primary immunization. We measured antibody titers (50% plaque reduction neutralization test) using a panel of WT JE strains at baseline, then after the booster at 28 days and 6 months in all subjects present at the time points, and in a subset at year 5. Three additional recent isolates were tested at year 5.

Results. Of 47 subjects, 43 (91.5%) had JE neutralizing antibody titers ≥10 (reciprocal serum dilution) against the homologous strain before the JE-CV boost; all were seroprotected up to year 5 after the JE-CV boost. Baseline WT seroprotection ranged between 78.7% and 87.2%; all subjects were seroprotected against the 4 WT strains at 28 days and 6 months; year 5 seroprotection ranged between 95.7% and 97.9%. Similar rates of protection against 3 additional WT isolates were observed at year 5.

Conclusions. The long-term immune responses induced after a JE-CV booster dose in toddlers were able to neutralize WT viruses from various genotypes circulating in Southeast Asia and India.

Clinical Trials Registration. NCT00621764.

Japanese encephalitis (JE) is a mosquito-borne viral infection that is considered a disease of major public health importance because of its high epidemic potential, high case-fatality rate, and the severity of sequelae among survivors. Despite the availability of effective vaccines for many decades, JE is still believed to cause >68 000 cases of encephalitis annually, resulting in approximately 15 000 deaths [1][2][3]. JE also causes major neurological sequelae, affecting more than one-third of survivors. An estimated 3 billion persons live in JE-endemic areas and are at risk if not vaccinated [3]. Expatriates and children living in these endemic areas are at risk of contracting JE, as are travelers and military personnel deployed overseas [4; 5, chap 4]. There is no specific treatment for JE, and the virus cannot be eradicated, given its animal reservoir. Hence, vaccination is the most effective approach to disease control [6]. New JE vaccines have brought significant improvements in terms of safety, immunogenicity, and modern updated production methods. A single serotype of JE virus has been identified, although antigenic heterogeneity has been reported between different isolates. The existence of a single serotype is also supported by detailed phylogenetic analysis [7]. Sequence heterogeneity between the different isolates was used to define 4 main genotypes based on a 240-base nucleotide sequence stretch of the viral premembrane (prM) gene. This region was selected because it shows the largest sequence variation of the whole JE virus genome. A fifth JE virus genotype (V) recently reemerged in Asia (China and South Korea) [8][9][10][11]. Recent analyses of circulating strains have shown a shift from genotype III in favor of genotype I in Vietnam [12] and Thailand [13], suggesting that a genotype shift phenomenon may be occurring throughout Southeast Asia [9,12].
There is no indication yet that this shift toward genotype I is affecting the efficacy of the JE vaccines in use, which are all derived from genotype III viruses (strains Nakayama, P3, Beijing-1, and SA14-14-2), although some studies have shown differences in the capability of serum samples from recipients of genotype III vaccines to neutralize in vitro wild-type (WT) strains from nonvaccine genotypes, namely genotype I isolates [14,15]. In addition, most studies that have assessed the neutralization of WT strains in vaccinees have tested serum samples taken at the peak of the effector immune response, that is, 28 days after vaccination, when antibody titers are at their maximum [14][15][16][17][18]. We sought to assess whether responses at distal time points after vaccination were still able to neutralize recent WT isolates, including those from genotype I.

MATERIALS AND METHODS

Vaccine: JE-CV

JE-CV, a live-attenuated JE vaccine (IMOJEV; Sanofi Pasteur), is constructed by replacing the premembrane and envelope coding sequences of the yellow fever vaccine virus (strain 17D) genome with the corresponding sequences from the JE SA14-14-2 virus strain, as described elsewhere [19,20]. JE-CV was developed by Acambis, now part of Sanofi Pasteur. The reconstituted vaccine was stored at 2°C to 8°C, and a 0.5-mL vaccine dose, containing 10^4 PFU, was administered within 4 hours of reconstitution by subcutaneous injection in the deltoid region.

Phase II Study

The study design, participants, and procedures have been described before, and the trial was registered with ClinicalTrials.gov (NCT00621764) [17]. Briefly, this phase II, open-label crossover study of JE-CV booster vaccination with hepatitis A vaccine as a safety control was conducted in Thailand in 100 children aged 2-5 years who had been vaccinated at age 12-18 months with 2 doses of a mouse brain-derived inactivated JE vaccine (Beijing strain), according to the national immunization schedule at that time. Participants were randomly allocated to receive a single dose of JE-CV (0.5 mL) subcutaneously, followed 28 days later by hepatitis A vaccine (0.5 mL given intramuscularly), or vice versa, using an interactive voice response system based on randomization lists prepared by the study sponsor for each center and age stratum using the block method. Vaccine identity was not masked. The time between the primary immunization (inactivated JE vaccine) and JE-CV booster administration was 12.8 months (range, 5.5-38.9 months; interquartile range, 9.9-14.4 months). The immune responses against the JE-CV virus were determined from serum samples obtained up to year 5 after the JE-CV boost. The protocol was approved by the ethics committee or institutional review board of each of the 4 participating centers, and the study was conducted according to Good Clinical Practice guidelines. Each child's parent or guardian provided signed informed consent before exposure to any study procedures.

JE Viruses

Serum samples from all children were assessed for antibody responses against the JE-CV virus, and selected samples were also tested for responses to WT JE viruses. The samples evaluated for the current analysis were taken at baseline, 28 days after JE-CV vaccination, then at either 6 or 7 months after the JE-CV vaccination (depending on administration sequence), and at year 5.
A panel of WT JE strains was used for antibody testing in the phase II study: a 1991 genotype I isolate from Korea (1991 TVP-8236), a 1983 genotype II isolate from Thailand (B1034/8), a 1949 genotype III isolate from China (Beijing), and a 1981 genotype IV isolate from Indonesia (JKT 9092/TVP-6265) [17]. WT assays were initially performed on samples taken at baseline and at 28 days and 6 months after JE-CV vaccination in all subjects present at the corresponding time points. These 4 strains, as well as 3 additional ones, were also tested in a subset of subjects on serum samples obtained at the year 5 follow-up visit: a 2003 genotype I isolate from Thailand (JEV-SM1), a 1997 genotype III isolate from Vietnam (JEV-902/97), and a 2005 genotype III isolate from India (JEV-057434) [18]; these 3 WT strains were added to the testing schedule to assess cross-neutralization against strains more recent than the original 4 WT strains evaluated. Further details on the WT isolates are provided in Table 1. A serological correlate based on JE virus neutralizing antibody titers is accepted and recommended by a panel of experts assembled by the World Health Organization (WHO); a threshold of ≥1:10 using a 50% plaque reduction neutralization test (PRNT50) is accepted as evidence of protection [22,23].

Neutralization Tests

Antibody titers were tested by PRNT50. For all WT JE strains, testing was conducted at the WHO Flavivirus Diagnostics Reference Laboratory for Asia at the Center for Vaccine Development (Mahidol University, Thailand) using serial 10-fold dilutions of the serum samples mixed with a constant challenge dose of each respective JE virus and inoculated in duplicate onto wells confluent with LLC-MK2 cells. The neutralizing antibody titer was calculated and expressed as the reciprocal serum dilution (1/dil) reducing the mean plaque count by 50%, as calculated by probit analysis. Antibody testing for the JE-CV strain was performed at Focus Diagnostics using serial 2-fold dilutions of serum samples mixed with a constant challenge dose of JE-CV and inoculated in duplicate onto wells confluent with Vero cells. The assay had a lower limit of quantification titer of 10 (1/dil). Children with titers ≥10 (1/dil) are considered seroprotected against JE [23].

Statistical Methods

The analysis of neutralizing antibody to JE viruses was done using the geometric mean titer (GMT) and the seroprotection rate (defined as the proportion of subjects with a JE PRNT50 neutralizing antibody titer ≥10). Assuming that log10 transformation of the titers would follow a normal distribution, the mean and 95% confidence intervals (CIs) were calculated for log10 titers, using the usual calculation for a normal distribution; antilog transformation was then applied to the results of those calculations to provide the final GMT and CI values. CIs for a single proportion were calculated using the exact binomial method (Clopper-Pearson method). The results are presented for the full analysis set (ie, all participants present at the first vaccination visit who received ≥1 dose of vaccine and had serum samples available for WT testing at year 5). A sketch of these calculations is given below.
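The following minimal sketch implements the two summary statistics just described: the GMT with a normal-theory 95% CI computed on log10 titers, and the Clopper-Pearson exact CI for the seroprotection rate. The titer values are invented for illustration only; the functions are not taken from the study's analysis code.

```python
import math
from scipy import stats

def gmt_with_ci(titers, alpha=0.05):
    """GMT and 95% CI: mean +/- t * SE on log10 titers, then antilog."""
    logs = [math.log10(t) for t in titers]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    half = stats.t.ppf(1 - alpha / 2, n - 1) * sd / math.sqrt(n)
    return 10 ** mean, 10 ** (mean - half), 10 ** (mean + half)

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial (Clopper-Pearson) CI for k successes out of n."""
    lo = 0.0 if k == 0 else stats.beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else stats.beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

titers = [40, 80, 160, 20, 320, 80, 160, 40]   # illustrative PRNT50 titers
gmt, lo, hi = gmt_with_ci(titers)
print(f"GMT = {gmt:.1f} (95% CI {lo:.1f}-{hi:.1f})")

protected = sum(t >= 10 for t in titers)        # seroprotection threshold
p_lo, p_hi = clopper_pearson(protected, len(titers))
print(f"Seroprotection {protected}/{len(titers)} "
      f"(95% CI {100 * p_lo:.1f}%-{100 * p_hi:.1f}%)")
```

Note that when all subjects are seroprotected (k = n), the Clopper-Pearson interval still yields an informative lower bound, which is why exact methods are preferred over normal approximations for small subsets such as the 47 children analyzed here.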
Study Population

There were 100 children aged 2-5 years enrolled in the study. The subset was defined as the 47 subjects present at year 5 visits with remaining serum samples available for conducting the WT assessments at this time point (these 47 subjects had been tested with the JE-CV virus and the 4 original WT isolates at baseline before JE-CV immunization and at 28 days and 6 months after JE-CV vaccination). The subset of 47 was similar to the 100 enrollees in terms of age, sex, weight, height, and body mass index at enrollment (see Table 2). An equal number of subjects in the subset received JE-CV as the first or the second vaccine administration in the crossover design.

Immunogenicity

DISCUSSION

Previous publications have described the immune response to JE-CV vaccination in pediatric populations when JE-CV was used as a single dose for primary immunization, and as a JE booster vaccination in subjects immunized earlier with a JE vaccine (either inactivated mouse brain-derived vaccine or JE-CV) [17,24,25]. These studies also demonstrated that the seroprotective response after a JE-CV booster vaccination persisted in almost all vaccine recipients up to year 5, and neutralizing antibody titers remained well above the threshold for protection [26]. A modeling approach predicted that the seroprotection rate in children after a JE-CV booster vaccination at age 2-5 years would remain high for ≥10 years [27]. In contrast, limited or no data have been reported or published on the cross-neutralization of circulating WT JE virus strains over time after JE vaccination. A previous article [15] reported the immune responses in terms of seroprotection rates and GMTs against the JE-CV virus and WT strains up to 6 months after a JE-CV booster vaccination. Assessment of immune responses to WT strains from all 4 JE virus genotypes is recommended as part of the evaluation of new JE vaccines [22]. We report here the cross-neutralization of a range of different WT JE virus strains from the 4 main genotypes up to 5 years after JE-CV booster vaccination. In our sample of 47 children aged 2-5 years at the time of the booster dose, with serum samples available at year 5 for testing the 4 original WT strains previously assessed up to 6 months after JE-CV booster vaccination, as well as 3 more recent WT strains, seroprotection rates remained high 5 years after JE-CV booster vaccination, with 92%-96% of vaccinees protected against genotype I strains and 96%-98% protected against genotype III strains. These results compare with a 100% seroprotection rate against the homologous JE-CV virus at all postbaseline time points. The observation of a strong booster immune response, while many subjects still had titers above the WHO-supported correlate of protection (titer ≥10), confirms that prior vaccination does not confer sterile immunity. The booster response was strong, multifold higher than the original response. Feroldi et al [25] previously described a memory response in children who received a primary vaccination with JE-CV and a JE-CV booster 12-24 months later; there was a measurable, robust response within the first 7 days. This documented memory and boosting response is reassuring: clinically apparent infection with a WT virus after prior vaccination is unlikely, because it would take a few days for a neurotropic virus such as JE virus to enter the nervous system, by which time the virus would be effectively neutralized.
In fact, the analysis of vaccine viremia showed that no subjects had viremia in the boosted pediatric cohort, whereas viremia was a common observation (though at very low titers) in naive adult or pediatric subjects vaccinated for the first time [17]. Two genotype I strains of different origins were tested, with similar seroprotection rates and GMTs at year 5 being observed for the strains isolated in Thailand (2003 JEV-SM1) and in Korea (1991 TVP-8236). In addition, the year 5 cross-neutralization response to a newer genotype III strain (2005 JEV-057434 from India) was similar to the responses to strains isolated in China (1949 Beijing) or Vietnam (1997 JEV-902/97). The antibody titers at the 28-day and 6-month time points were similar for the 3 WT strains representing genotypes I-III, but those for the genotype IV strain were lower than for all other WT isolates at 28 days, 6 months, and 5 years, although at all these time points the GMT values remained well above the protective threshold and the seroprotection rate was high (97.9%-100%). However, for the recent WT isolates, no data before year 5 are available. As expected, and as already observed in adults [28], the lowest GMTs were in response to the genotype IV strain, despite a high seroprotection rate. All currently approved JE vaccines are based on a single strain, and all available epidemiological data suggest that they are able to induce a protective response against all circulating JE viruses [29]. The vaccine strain SA14-14-2 (the parent strain of the JE-CV virus) has been characterized antigenically in animal studies using panels of monoclonal and polyclonal antibodies and JE viruses; the results showed that antibodies elicited by SA14-14-2 were able to neutralize all the strains tested [30]. In addition, protection against JE viruses belonging to the 4 major genotypes was demonstrated after passive transfer of mouse serum samples raised against JE-CV [22,28]. JE virus genotypes seem to have no relevance in terms of the elicited protection against disease but are useful for characterizing circulating JE viruses [8,31]. There is no evidence for any correlation between genotype and location, virulence, or antigenicity. The functional significance of the prM nucleotide variation and of the genotypes remains to be established. JE-CV (IMOJEV) is indicated in individuals 9 months of age or older. In pediatric populations, JE-CV is recommended as a single dose for primary immunization, with a booster dose given preferably 12-24 months after primary vaccination. The cross-neutralization of WT JE strains up to 28 days after a single-dose primary immunization has been reported elsewhere [17]. Given the recommended immunization regimen combining a single-dose primary immunization and a booster dose, cross-neutralization of WT JE strains after primary vaccination was assessed only up to 6 months after JE-CV administration (data not shown), because it has been demonstrated that the administration of a JE-CV booster dose induces high antibody titers for long-lasting protection in children [26]. The results of the current analysis using individual serum samples are consistent with data on pooled serum samples from JE-vaccine-naive toddlers 28 days after primary immunization with JE-CV reported by Bonaparte et al [18], which showed effective cross-neutralization of the same recent WT strains from genotypes I and III that were tested in our study, as well as 3 other reference JE viruses.
As of this writing, existing licensed and marketed JE vaccines are based on genotype III strains (eg, Nakayama, Beijing, SA14-14-2), but in contrast to JE-CV, the neutralization of WT strains in children vaccinated with these products has not been assessed over time. Our study findings show that up to 5 years after vaccination, serum from JE-CV vaccinees can neutralize WT virus strains of the 4 main genotypes circulating in Southeast Asia and India, where JE is endemic [14,15]. In conclusion, the seroprotective response after a JE-CV booster vaccination persists in almost all vaccine recipients up to 5 years, and neutralizing antibody titers remain well above the threshold for protection. Serum samples obtained from JE-CV vaccinees 28 days, 6 months, and 5 years after vaccination can neutralize WT viruses of the 4 main genotypes circulating in Southeast Asia and in India, where JE is endemic. Our data are useful for decision making concerning JE vaccination strategy, keeping in mind that a shift from genotype III to genotype I strains has been increasingly observed in some Asian countries.
Process simulation of CO2 capture from CO2-EOR associated petroleum gas with aqueous MEA and MDEA solvents

Associated petroleum gas produced by CO2-enhanced oil recovery (CO2-EOR) has a complex composition, a high CO2 content, and an unstable flow. To date, no integrated process simulation for CO2 capture from EOR associated petroleum gas has been developed. Based on the analysis of the associated petroleum gas obtained from the Shengli oilfield, a new full-simulation model of CO2 capture from 100 000 Nm3/d of the associated petroleum gas was developed for the process design of chemical absorption with blended amine solvents, and the relationship between different process parameters and the targeted result (low circulation flow and energy consumption) was analyzed using Aspen Plus (Version 8.6). A 90% CO2 removal rate was achieved using a blended amine solvent at a lean loading of 0.2 mol CO2/mol, with an energy consumption of 3.16 GJ/tCO2. By analyzing the influence of absorption pressure, temperature, and packing height on the circulation flow, the influence of absorption and desorption pressure on the energy consumption of the system was determined. It can be concluded that the optimal parameters are as follows: packing height of the absorption column, 10 m; absorption temperature, 308.15 K; desorption pressure, 0.12 MPa; and CO2 loading, 0.2 mol/mol of lean solvent, with an absorption pressure of 0.3-0.6 MPa.

For reservoirs with large oil and gas resources, CO2-EOR is a preferable recovery method. A fraction of the CO2 injected into the subsurface for flooding will be released via the production of natural gas as the associated petroleum gas. Natural gas containing CO2 causes pipeline corrosion during subsequent transportation and is particularly detrimental. 9,10 In addition, the presence of CO2 in natural gas reduces its calorific value, increasing the transportation cost per unit of energy. 11 At present, four methods are mainly used for CO2 capture from the associated gas: chemical absorption, pressure-swing adsorption, membrane separation, and low-temperature fractionation. 12,13 Among them, the mature and efficient chemical absorption method, with its high CO2 removal efficiency and scale-up feasibility, is most commonly used. [14][15][16] In addition, flue gas CO2 capture technology offers an obvious starting point for treating associated petroleum gas. The difference between the two applications lies in the influence of the raw material characteristics on the process parameters: the associated petroleum gas is pressurized, whereas flue gas is at atmospheric pressure, so the absorption driving force for flue gas is lower than that for natural gas. 17 Associated petroleum gas typically has a low amount of methane (50%-70%), high amounts of nitrogen and carbon dioxide (2%-20%), and abundant hydrocarbons. 18 Specific characteristics of the associated petroleum gas are discussed in detail in the next section. During CO2 removal from associated petroleum gas, selection of a proper solvent is essential for good chemical absorption, and it also influences the removal efficiency and energy consumption of the overall process. An aqueous amine solvent is commonly composed of monoethanolamine (MEA), diethanolamine (DEA), triethanolamine (TEA), or methyldiethanolamine (MDEA). In contrast to the use of a single alkylamine solvent, recent research has shown that blended amine solvents can overcome the problems of lower absorption efficiency and higher reboiler duty, obtaining a better balance between the two.
In the past 2 years, many new chemical solvents have been developed in order to improve the stability and activity of the absorption process and reduce the energy consumption in the desorption process. Zhang et al. 19 argued that low energy consumption and less secondary pollution are advantages of the carbonic anhydrase (CA) absorption process over the traditional amine-based CO2 capture method, and also investigated the activity and stability of surfactants relevant to industrial carbonic anhydrase. Zhang et al. 20,21 and Fang et al., 22 by comparing the performances of various absorbents in the chemical absorption method, showed that blended amines and phase-change solvents are some of the most promising new solvents. To date, the mixed solvent of MEA and MDEA has been considered ideal for the absorption of an acidic gas. 23,24 Thus, we used a blended solvent containing MEA and MDEA in a 1:3 ratio in the simulation. To solve the persistent problem of energy consumption, various process flowsheet modifications have been proposed in the literature. Researchers have focused on process modification and superstructure approaches, including the use of multiple solvent feeds to the absorber column, the addition of auxiliary equipment such as heat pumps, and absorber intercooling. 25,26 Moullec et al. 27 proposed a large number of possible configuration patterns that can be used to enhance the overall performance and reduce the reboiler duty. Cho et al. 28 reported the design and optimization of a novel amine-based sweetening system that reduced the cost and energy consumption by 15.9%. Incorporating a variety of configurations to study the performance and energy consumption is difficult in practice, so simulation is the most practical method for assessing the energy consumption. Zhang et al. 29 and Lu et al. 30 used the rate-based model embedded in Aspen Plus to simulate the process and suggested that it was superior to the equilibrium-stage model and more useful for the design of CO2 capture systems. Muhammad et al. 31 used Aspen HYSYS to simulate the effect of the acidic gas content on the total cost of the procedure. Roy et al. 32 performed a simulation of the Bakhrabad gas processing plants using the Aspen HYSYS process simulator to validate the proposed model. By comparing these two types of simulation software (Aspen Plus and Aspen HYSYS), Gutierrez et al. 33 showed that the simulations did not produce any significant difference in results, with errors of less than 15%. Based on the validation of the model embedded in Aspen Plus, it is now the most commonly used simulation method in research. This study was performed to accurately analyze associated petroleum gas and propose a CO2 capture system. The parameters of the process equipment suitable for the pressurized associated petroleum gas, ensuring normal operation and absorption efficiency in the chemical absorption method, were determined while studying the CO2 capture process through system simulation. This will be of guiding significance for CO2 capture from associated petroleum gas.

| Associated petroleum gas
The composition of the associated petroleum gas produced by CO2 flooding fluctuates considerably, with a large CO2 content and high partial pressure, which presents technical difficulties in CO2 separation and treatment. The components of the associated petroleum gas in a single well in the Shengli oilfield are shown in Figure 1.
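Before the detailed composition is given, a back-of-envelope mass balance conveys the scale of the capture duty for the design basis described below (100 000 Nm3/d feed, 20 mol% CO2 per Table 1, 90% removal). A minimal sketch in Python, assuming Nm3 is defined at "normal" conditions of 0 °C and 101.325 kPa:

```python
# Rough CO2 mass balance for the design basis; the 0 degC Nm3 convention
# (molar volume 22.414 L/mol) is an assumption, not stated in the paper.
FEED_NM3_PER_DAY = 100_000
CO2_MOLE_FRACTION = 0.20      # Table 1
CAPTURE_RATE = 0.90           # design removal target
MOLAR_VOLUME_L = 22.414
CO2_MOLAR_MASS_KG = 0.04401

co2_nm3 = FEED_NM3_PER_DAY * CO2_MOLE_FRACTION * CAPTURE_RATE
co2_mol = co2_nm3 * 1_000 / MOLAR_VOLUME_L
co2_t_per_day = co2_mol * CO2_MOLAR_MASS_KG / 1_000
print(f"CO2 captured: {co2_t_per_day:.1f} t/d")   # roughly 35 t/d (~1.5 t/h)
```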
Combined with other wells in the field, the gas exhibits the following characteristics: (a) a high carbon content of 91%; (b) a high heavy-hydrocarbon content (C5+) of 1.12%, accounting for 8.6% of the total hydrocarbons; (c) no oxygen or sulfide gases; (d) entrained liquid crude oil, so oil-gas separation should be performed before the gas enters the separation process system; (e) no water vapor; and (f) a pressure range of 0.3-0.6 MPa(G). On analyzing the composition of the associated petroleum gas, a high CO2 content (>15%) was observed. A set of technological processes applicable to all large-scale stations in the oilfield, with a scale of 100 000 Nm3/d, was developed. According to Table 1, the CO2 and CH4 contents in the feed gas were 20 mol% and 80 mol%, respectively.

| Simulation process
Most CO2 capture processes are similar in terms of absorption and desorption columns, although some modifications are possible to reduce the energy consumption. 34 A complete flow process of the CO2 capture system was generated in Aspen Plus (Version 8.6) and is shown in Figure 2. The extracted gas (material 1) is first processed through the pretreatment system, which includes an oil and gas separation column, a heavy hydrocarbon removal system, a filter, and a gas preheater, to remove the oil and heavy hydrocarbon constituents. The gas leaving the pretreatment system (material 2) is then sent to an absorption column, where the CO2 in the associated petroleum gas is absorbed by the solvent. The exhaust gas (material 3) is subsequently discharged from the top of the absorption column into the processing network. The rich amine loaded with CO2 (material 4) from the bottom of the column is pumped into the rich/lean amine heat exchanger, and the heated rich amine (material 5) is subsequently fed to the desorption column. Here, the dissolved CO2 is stripped from the solvent, and the overhead CO2 and water vapor are cooled and separated. Simultaneously, >99% pure (dry basis) CO2 gas (material 6) is obtained, which is then sent to the downstream section. The lean amine (material 7) is discharged from the desorption column after the CO2 is released. The high-temperature lean liquid (material 8) and the low-temperature rich liquid (material 4) exchange heat in the heat exchanger, after which the lean liquid passes through the pump and cooler. A new lean liquid (material 10), a mixture of the low-temperature lean liquid (material 9) and the liquid condensed from the CO2 product gas, enters the absorption column for a new absorption cycle. A continuous absorption-desorption solvent loop is thus formed. Based on the design information provided, the main parameters of the absorption and desorption columns are shown in Table 2; this information was used as the basis for the Aspen Plus simulation. Mellapak 250Y structured packing (Sulzer) was chosen to achieve a minimal pressure drop while providing many theoretical separation stages at minimal energy consumption; the packing parameters used in the simulation are listed in Table 3.

| Reaction principle
MDEA (methyldiethanolamine, C5H13NO2) is stable and does not corrode carbon steel. Since MDEA is weakly alkaline, it is easier to desorb after absorbing acidic gases, and regeneration can be performed by flash evaporation at low pressures, resulting in significant energy savings. However, the CO2 absorption rate of MDEA is low.
MEA (monoethanolamine, C2H7NO) exhibits a rapid absorption rate but has a small absorption capacity and some corrosive effect, resulting in larger energy consumption and easy oxidation by SO2 and O2. In various applications, CO2 capture is achieved by a single amine or by amine/piperazine (PZ)-promoted absorbents. However, the removal rate of the blended MEA/MDEA absorbent is higher than that of a single-component absorbent, with additional economic benefits. The reaction of MDEA (R2 = C4H10O2, ie, two hydroxyethyl groups) with CO2 can be written as:

CO2 + R2NCH3 + H2O ⇌ R2N(H)CH3+ + HCO3−   (R1)

and the overall carbamate-forming reaction of MEA (R' = C2H4OH) as:

CO2 + 2 R'NH2 ⇌ R'NH3+ + R'NHCOO−   (R4)

The reaction in Equation R4 is much faster than that in Equation R1. To summarize, the addition of MEA modifies the process of absorption of CO2 in the MDEA solvent: the CO2 absorbed by MEA is continuously transferred to MDEA, thus imparting both high absorption and high desorption rates. The simulated calculations were performed with a lean amine CO2 content of 0.1 mol CO2/mol MDEA and 0.28 mol CO2/mol MEA; the detailed parameters are shown in Table 4. After determining the base values, the other operational parameters were chosen as shown in Table 5. These parameters ensured that the simulation could be performed, providing the conditions for an accurate optimization.

| Thermodynamic model
The models embedded in Aspen Plus for the MEA-MDEA-CO2-H2O system were combined to develop a thermodynamic model. Binary interaction parameters and electrolyte pairs recommended by Aspen Plus from its physical property data bank were used for the MEA-MDEA-CO2-H2O system. 35 The Redlich-Kwong equation of state and the electrolyte-NRTL (Non-Random Two-Liquid) method were used to compute the properties of the vapor and liquid phases, respectively. Henry's law was applied to CO2 and CH4, and the Henry's constants for these species were retrieved from the Aspen Plus software. 36 The related ionic equilibria are:

2 H2O ⇌ H3O+ + OH−
CO2 + 2 H2O ⇌ H3O+ + HCO3−
HCO3− + H2O ⇌ H3O+ + CO3(2−)
MEAH+ + H2O ⇌ MEA + H3O+
MEACOO− + H2O ⇌ MEA + HCO3−
MDEAH+ + H2O ⇌ MDEA + H3O+

The temperature dependence of the chemical equilibrium constants of these reactions can be expressed as:

ln K_eq = A + B/T + C ln T + D T + E (P − P_ref)/P_ref

where K_eq is the equilibrium constant, Henry's law constant, or salt precipitation equilibrium constant of reaction eq; T is the temperature in K; and P_ref is the reference-state pressure in Pa. The constants A, B, C, D, and E are retrieved from the Aspen Plus databank. 35

| Rate-based model
A rate-based model was also developed in Aspen Plus to simulate the combined MEA + MDEA absorption under the associated petroleum gas parameters listed in the previous section. The rate-based model was built on the RadFrac module embedded in Aspen Plus, which partitions the absorption column into stages and calculates the mass transfer, heat transfer, chemical equilibrium, reaction kinetics, hydraulic characteristics, and interfacial behavior at each stage. To properly calculate the complex absorption process using the aqueous MEA/MDEA absorbent, the rate-based model must account for the thermodynamics of the MEA-MDEA-CO2-H2O system, the reaction kinetics of CO2 with aqueous MEA and MDEA, and the RadFrac model parameters governing mass and heat transfer. The kinetically controlled reactions are:

CO2 + OH− → HCO3− (and its reverse)
MEA + CO2 + H2O → MEACOO− + H3O+ (and its reverse)
MDEA + CO2 + H2O → MDEAH+ + HCO3− (and its reverse)

Reduced power-law expressions were used for the kinetically controlled reactions 28:

r_j = k_j T^n exp(−E_j/(R T)) ∏ C_i^(a_i),   i = 1, …, N

where r_j is the rate of reaction j; k_j is the pre-exponential factor; T is the absolute temperature; n is the temperature exponent; E_j is the activation energy; R is the gas constant; N is the number of components in the reaction; C_i is the concentration of component i; and a_i is the stoichiometric coefficient of component i.
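As a minimal sketch of how the equilibrium-constant correlation and the reduced power-law expression above can be evaluated, the following Python snippet uses illustrative parameter values only (the k, Ea, and A-D constants shown are placeholders, not the Aspen Plus databank values):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def ln_keq(T, A, B, C, D, E=0.0, P=101_325.0, P_ref=101_325.0):
    """ln K_eq = A + B/T + C*ln(T) + D*T + E*(P - P_ref)/P_ref."""
    return A + B / T + C * math.log(T) + D * T + E * (P - P_ref) / P_ref

def power_law_rate(k, n, Ea, T, concs, exps):
    """Reduced power law: r = k * T**n * exp(-Ea/(R*T)) * prod(C_i**a_i)."""
    rate = k * T**n * math.exp(-Ea / (R * T))
    for c, a in zip(concs, exps):
        rate *= c**a
    return rate

# Placeholder parameters; real values come from the Aspen Plus databank.
print(ln_keq(313.15, A=231.46, B=-12_092.1, C=-36.78, D=0.0))
print(power_law_rate(k=4.3e13, n=0.0, Ea=55_470.0, T=313.15,
                     concs=[1.2e-3, 2.5], exps=[1, 1]))  # e.g. [CO2], [MEA]
```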
Table 4 (excerpt). Absolute pressure: 0.30 MPa.

In addition, in order to simulate the steady-state process, the model makes the following assumptions 37,38:

• The bulk liquid and gas phases are well mixed.
• The vaporization of the blended solvents is neglected.
• Ambient heat loss is ignored.
• Both heat and mass transfer conform to the two-film theory.

| Model validation
The rate-based model was verified by comparing experimental data with simulated data, with CO2 concentration as the main index. Borhani et al. 39 verified the reliability of the rate-based model for capturing the acidic gases CO2 and H2S during natural gas sweetening in two cases, showing almost no variation between the experimental and simulation results. For the CO2 capture process in natural gas, Salvinder et al. 38 and Emmanuel et al. 37 adopted the rate-based model for energy consumption and process analysis. Therefore, it can be concluded that the rate-based model can reliably predict the CO2 capture of the CO2/CH4 system by an amine solvent. In the simulation, the CO2 capture rate was set to 90%, with the main objective of analyzing the influence of key process parameters on the lean liquid flow and energy consumption.

| Effects of the absorption column packing height and size
The temperature of the lean solvent at the absorber inlet was initially set to 313.15 K. To reach the gas purity requirements, the amount of solvent needed for circulation was analyzed as a function of the packing height under different operating pressures. As shown in Figures 3 and 4, over packing heights of 1-4 m the flow rate of the lean solvent decreases while the CO2 loading of the rich amine increases, indicating that at low packing heights the solvent is not in full contact with CO2 because of the short contact time between the gas and liquid. Therefore, when the packing height and reaction time increased, the circulation of the solvent and the CO2 loading of the rich amine changed significantly. When the packing height was >8 m, the reaction time in the absorption column was sufficiently long; with further increases in packing height, the reaction time changed only negligibly. Therefore, the flow of the lean amine and the CO2 loading of the rich amine remained largely unchanged after that point. According to the variation curves of the parameters under different pressures in Figures 4 and 5, as the pressure increases, the circulation of the solvent decreases and the CO2 loading of the rich amine increases, indicating that a higher pressure increases the solvent's absorption of CO2. In addition, the flow curves shifted to the left, and lower packing heights could meet the minimum requirements of the absorption reaction, demonstrating that a high pressure improves the absorption rate. Therefore, the optimum packing height was determined to be 10 m.

| Effects of the inlet temperature of the lean amine
To eliminate the limiting effects of the packing height on the absorption process, a 10 m packing height was selected, and the influence of the absorption temperature on the circulation of the solvent and the CO2 loading of the rich amine was analyzed. It was assumed that the absorption process was largely unaffected by the packing height. Figures 5 and 6 show the variation curves of the solvent circulation and CO2 loading with the absorption temperature. Because the reaction of CO2 with the alcohol amine solvent is exothermic, low temperatures are advantageous.
The amount of circulating solvent increases as the absorption temperature rises. When the absorption temperature was increased from 298.15 to 348.15 K, the solvent circulation increased by approximately 1.55 times. As the absorption temperature increases, the equilibrium concentration of CO2 in the amine solvent decreases, and the CO2 concentration in the rich amine decreases. In other words, at lower absorption temperatures the CO2 loading increases and the required circulation flow decreases. It can also be seen that at 308.15 K, both the circulation flow and the CO2 loading exhibit inflection points, where the circulation flow reaches a minimum. Thus, a temperature of 308.15 K was chosen for the lean solvent inlet.

| Effects of the pressure of the absorption column
Based on the above analysis, we chose 10 m as the packing height of the absorption column and 308.15 K as the temperature of the lean amine inlet. Changing the operating pressure of the absorber changes the inlet pressure of the gas and lean amine. Holding the other parameters constant, we analyzed the trends of the CO2 loading in the rich solvent and the solvent circulation. As shown in Figures 7 and 8, as the operating pressure increased from 0.3 to 1.0 MPa, the circulation of solvent decreased from 40.2 to 30.2 t/h, and the CO2 loading in the rich amine increased from 0.55 to 0.65 mol/mol of solvent. High pressure allows the alcohol amine solvent to absorb CO2 more fully, resulting in a significant reduction in the amount of circulating solvent. The absorption pressure should be selected by considering the influence of the pressure change on the nature of the associated petroleum gas, the energy consumption, and the cost of the equipment. Therefore, the optimal operational parameters should be determined after analyzing the effect of absorption pressure on the energy consumption.

| Effects of CO2 loading of the lean solvent
The CO2 content in the lean solvent directly affects the amount of CO2 absorbed by the solvent, which in turn affects the circulation flow of the lean solvent in the absorption column. Figure 9 shows the relationship between the circulation flow and the CO2 loading of the lean solvent. From a loading of 0.13 to 0.28 mol/mol of lean solvent, the amount of circulating solvent increased from 32.2 to 51.51 t/h. The CO2 loading of the lean solvent influences both the desorption energy consumption and the circulation flow. A lower CO2 loading of the lean solvent decreases the circulation flow, but achieving it requires a deeper degree of solvent regeneration, with higher desorption temperatures and thus higher energy consumption. Therefore, for the optimization of the CO2 loading in the lean solvent, both the circulation flow and the energy consumption should be considered.

| Effects of the absorption pressure on energy consumption
From Figure 10, it is clear that, over the pressure range shown, with increasing absorption pressure the reboiler duty decreases rapidly from 1.59 to 1.21 MW, and the unit energy consumption decreases from 3.68 to 2.8 GJ/t CO2. The operating pressure of the absorber evidently has a significant influence on the energy dissipation of the system. When the absorption pressure was increased from 0.2 to 1 MPa, the reboiler duty decreased by 23.8%, and the energy consumption per unit of product decreased by a similar degree.
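For reference, the conversion between reboiler duty and the quoted specific energy figures is direct. A small sketch, with the CO2 product rate back-calculated from the paper's own numbers (about 1.56 t/h, slightly above the nominal 90%-removal estimate of roughly 1.47 t/h):

```python
def specific_energy_gj_per_t(reboiler_mw, co2_product_t_per_h):
    """Specific reboiler energy in GJ per tonne CO2 (1 MW = 3.6 GJ/h)."""
    return reboiler_mw * 3.6 / co2_product_t_per_h

# A CO2 product rate of ~1.56 t/h reproduces the quoted figures:
print(specific_energy_gj_per_t(1.59, 1.56))  # ~3.7 GJ/t at low absorption pressure
print(specific_energy_gj_per_t(1.21, 1.56))  # ~2.8 GJ/t at 1.0 MPa absorption
```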
Therefore, the selection of the operating pressure of the absorber should consider the nature of the associated petroleum gas, the operational cost, and the energy consumption of the system. Herein, we chose 0.6 MPa as the optimal operating pressure.

| Effects of the desorption column pressure
The operating pressure of the desorption column affects the equilibrium solubility of CO2 in the solvent. With increasing pressure, the reaction balance moves toward absorption; desorption becomes more difficult, and more energy is consumed. From Figure 11, the bottom temperature and desorption pressure exhibit a mostly linear relationship: when the desorption pressure was increased from 0.1 to 0.2 MPa, the bottom temperature increased from 373.86 to 389.18 K. Figure 12 shows the relationship between the operating pressure of the desorption column and the reboiler duty and energy consumption per unit product of CO2. When the desorption pressure was increased from 0.1 to 0.2 MPa, the increased pressure made desorption more difficult and required higher desorption temperatures, so the column bottom temperature rose. The reboiler load increased from 1.28 to 1.71 MW, and the energy consumption per unit of product increased from 2.94 to 4 GJ/t CO2. Considering the lower energy consumption and the achievable precision of desorption pressure control, a desorption pressure of 0.12 MPa was selected.

Figure 11. Effects of the desorption column pressure on the bottom temperature.
Figure 12. Effects of the desorption column pressure on energy consumption.

| Effects of CO2 loading of the lean solvent
The relation between the unit energy consumption and the CO2 content in the lean solvent is shown in Figure 13. When the CO2 content was increased from 0.10 to 0.20 mol/mol of lean solvent, the desorption of CO2 became easier: the reboiler duty fell, and the unit energy consumption decreased from 3.33 to 3.14 GJ/t CO2, although the circulation flow increased from 29.85 to 40.5 t/h. To ensure efficient absorption and a reasonable circulation flow, a CO2 loading of 0.2 mol/mol of lean solvent was selected.

| CONCLUSION
Based on the chemical absorption method of CO2 capture, a complete set of EOR-gas extraction process parameters was determined for a novel absorbent pair. A 100 000 Nm3/d specification was used as a reference to obtain the detailed operational parameters. A blended solvent containing 30 wt% MDEA and 10 wt% MEA was used, resulting in a reduced energy consumption compared with single-component solvent systems. In the simulation, rate-based models were used to calculate the mass transfer, heat transfer, chemical equilibrium, reaction kinetics, hydraulic characteristics, and interfacial behavior at each stage for both the absorption and desorption columns. Circulation flow and energy consumption were the two main parameters considered during optimization. For the circulation flow, the main influencing factors were the packing height of the absorption column, the absorption pressure, the absorption temperature, and the CO2 loading of the lean solvent. For absorption pressures between 0.30 and 0.60 MPa, a packing height of 10 m for the absorption column and an absorption temperature of 308.15 K were reasonable. The maximum quantity of solvent required was approximately 39.15 t/h. In terms of energy consumption, the main influencing factors included the absorption pressure, the desorption pressure, and the CO2 loading of the lean solvent.
A comprehensive analysis showed that the optimal desorption pressure was 0.12 MPa and the lean-solvent loading was 0.2 mol/mol, which resulted in an energy consumption of approximately 3.16 GJ/t CO2. This work provides a basis for improving the gas processing scheme for the subsequent chemical process. In future studies, other devices will also be optimized.

Figure 13. Effects of CO2 loading of the lean solvent on the unit energy consumption.
The Metalloproteinase ADAM28 Promotes Metabolic Dysfunction in Mice Obesity and diabetes are major causes of morbidity and mortality globally. The current study builds upon our previous association studies highlighting that A Disintegrin And Metalloproteinase 28 (ADAM28) appears to be implicated in the pathogenesis of obesity and type 2 diabetes in humans. Our novel study characterised the expression of ADAM28 in mice with the metabolic syndrome and used molecular inhibition approaches to investigate the functional role of ADAM28 in the pathogenesis of high fat diet-induced obesity. We identified that ADAM28 mRNA and protein expression was markedly increased in the livers of mice with the metabolic syndrome. In addition, noradrenaline, the major neurotransmitter of the sympathetic nervous system, results in elevated Adam28 mRNA expression in human monocytes. Downregulation of ADAM28 with siRNA technology resulted in a lack of weight gain, promotion of insulin sensitivity/glucose tolerance and decreased liver tumour necrosis factor-α (TNF-α) levels in our diet-induced obesity mouse model as well as reduced blood urea nitrogen, alkaline phosphatase and aspartate aminotransferase. In addition, we show that ADAM28 knock-out mice also displayed reduced body weight, elevated high density lipoprotein cholesterol levels, and reductions in blood urea nitrogen, alkaline phosphatase, and aspartate aminotransferase. The results of this study provide important insights into the pathogenic role of the metalloproteinase ADAM28 in the metabolic syndrome and suggests that downregulation of ADAM28 may be a potential therapeutic strategy in the metabolic syndrome. Introduction Obesity is one of the most prevalent metabolic diseases globally and is an established risk factor for type 2 diabetes (T2D). Excess weight correlates directly with glucose intolerance and insulin resistance, which may ultimately lead to the development of T2D [1]. Recent studies have identified associations between obesity and T2D involving pro-inflammatory cytokines (tumour necrosis factor-α (TNF-α) and interleukin-6), insulin resistance, deranged fatty acid metabolism, and cellular processes such as mitochondrial dysfunction and endoplasmic reticulum stress [2]. Risk factors for obesity and fatty liver disease may include consumption of diets high in fat and fructose [3][4][5] and low physical activity [6]. A Disintegrin And Metalloproteinases, or ADAMs, are a group of transmembrane and secreted proteins which play an important role in regulating cell phenotype via their effects on cell adhesion, migration, proteolysis, and signalling [7]. These proteins have a major impact on the pathogenesis of numerous diseases. The metalloproteinase ADAM28, also known as lymphocyte metalloprotease MDC-L, was first identified on lymphoid cells and has two isoforms: (i) a membrane-type form (ADAM28m) and (ii) a secreted soluble form (ADAM28s). A number of early studies suggested that ADAM28 may be important in inflammation and metabolism [8,9]. One known substrate for ADAM28 is the pro-inflammatory cytokine TNF-α. Interestingly, a number of synthetic peptides containing the authentic TNF-α shedding site were shown to be cleaved by ADAM28 [8]. Our recent studies have confirmed that human ADAM28 is a novel sheddase of human TNF-α [9] and reinforced the notion that ADAM28 is a novel sheddase of one of the major pro-inflammatory cytokines involved in the pathogenesis of the metabolic syndrome. 
In our current study, we aimed to translate our previous findings [9] into an in vivo animal model to assess the effects of limiting ADAM28 activity on parameters of the metabolic syndrome. In this study, we establish that ADAM28 is pathogenic in the metabolic syndrome and provide evidence that metalloproteinase inhibition is a potential therapeutic target for anti-obesity agents.

Results
We have previously reported that high expression of ADAM28 mRNA in peripheral blood mononuclear cells from the San Antonio Family Heart Study (SAFHS) cohort (n = 1240) correlated strongly with parameters of the metabolic syndrome [9]. To further our previously published findings, we conducted ADAM28 expression and functional studies in our murine high fat diet-induced obesity model. Mice were weighed weekly to confirm obesity (Figure 1). High fat diet fed mice were glucose intolerant and insulin resistant compared to their chow fed counterparts, as previously reported [10]. In addition, livers of high fat diet fed mice were markedly steatotic with inflammatory cell infiltration, as demonstrated in our previous study [10].
ADAM28 mRNA Expression Is Significantly Elevated in the Liver of High Fat Diet Fed Mice
ADAM28 mRNA and protein expression in the liver was studied, as the liver is an organ well known for lipid and glucose metabolism [11]. We demonstrated that ADAM28 mRNA levels are increased in the livers of mice fed a high fat diet for 12 weeks (Figure 2). In addition, active ADAM28 (42 kDa) protein levels were also increased in the livers of mice fed a high fat diet, as evidenced by western blotting (Figure 3).

ADAM28 Expression Is Elevated in Human Monocytes Treated with Noradrenaline (NA)
Activation of the sympathetic nervous system (SNS) is a cardinal feature of obesity, metabolic syndrome, and type 2 diabetes (T2DM) and is associated with disease progression [12]. In order to test our hypothesis that sympathetic nervous system activation may result in elevated ADAM28 expression, we treated human THP-1 monocytes with noradrenaline (NA), the main neurotransmitter of the SNS. Excitingly, we have now shown for the first time that NA treatment may result in elevated ADAM28 expression in a dose-dependent manner (Figure 4). The difference between the 0 and 1.0 µM NA treatment groups was close to reaching significance (p = 0.0698).
In Vivo Knock-Down of ADAM28 Ameliorated Parameters of the Metabolic Syndrome
A vast array of studies have highlighted the ability of siRNA therapy to improve numerous disease states [13][14][15][16]. We have previously shown our capacity to successfully knock-down ADAM protein expression utilising siRNA technology [17]. Our next aim was therefore to utilise siRNA targeting mouse ADAM28 to reduce ADAM28 expression in vivo in our high fat diet-induced obesity mouse model. Our results demonstrate successful reduction of ADAM28 protein expression in the liver of mice treated with siRNA targeting mouse ADAM28 (mADAM28 siSTABLE siRNA) (Figure 5A). Interestingly, only the active form of ADAM28 was detected in the liver, suggesting all pools of ADAM28 are activated in the liver. The pro-form and active form of ADAM28 are observed in gonadal white adipose tissue (Figure 5B). Infiltrating inflammatory cells may be contributing to this expression in gonadal white adipose tissue. ADAM28 protein levels were mildly decreased in white adipose tissue of mice treated with ADAM28 siRNA.

Mice treated with mADAM28 siSTABLE siRNA also showed improvements in several parameters of the metabolic syndrome, including the failure to exhibit further increases in high fat diet-induced weight gain compared to mice treated with control siRNA (non-targeted siSTABLE siRNA) (Figure 6). In addition, mADAM28 siSTABLE siRNA treated mice displayed increased glucose tolerance (Figure 7A) and insulin sensitivity (Figure 7B) compared to mice treated with control siRNA. In addition, blood urea, which is indicative of kidney function, was reduced in the serum of ADAM28 siRNA treated mice (Figure 8A), suggesting that kidney function may be better preserved in ADAM28 siRNA treated mice. Liver enzymes, which are indicative of liver injury, such as alkaline phosphatase (Figure 8B) and aspartate aminotransferase (AST) (Figure 8C), were markedly reduced in the serum of ADAM28 siRNA treated mice.
As ADAM28 is a sheddase of TNF-α protein, we measured the TNF-α protein levels by ELISA in the livers of mice treated with either non-targeted siRNA or ADAM28 siRNA. We found that reducing ADAM28 activity resulted in reduced TNF-α protein levels in the liver (Figure 9). The TNF-α protein may be cleaved or cytoplasmic-derived TNF-α.
Metabolic Benefits in ADAM28 Knock-Out (KO) Mice
We used ADAM28 knock-out (KO) mice to determine if the absence of ADAM28 promoted metabolic benefits. Notably, ADAM28 KO mice are viable as adults with a normal life span. Excitingly, in agreement with our hypothesis, ADAM28 KO mice on a normal chow diet possess a reduced body mass at 10 months of age (Figure 10A; p = 0.05335). Serum high-density lipoprotein (HDL) cholesterol levels were significantly elevated in ADAM28 KO mice at 49 days (Figure 10B) and 6 months of age (Figure 10C). Blood urea, which is indicative of kidney function, was reduced in the serum of ADAM28 KO mice (Figure 10D), suggesting that kidney function may be better preserved in ADAM28 KO mice. Liver enzymes, which are indicative of liver injury, such as alkaline phosphatase (Figure 10E) and aspartate aminotransferase (AST) (Figure 10F), were markedly reduced in the serum of ADAM28 KO mice at 49 days of age.

Discussion
For the first time, we have examined the role of ADAM28 in the metabolic syndrome in an in vivo mouse model. We found ADAM28 mRNA and protein levels to be higher in the steatotic livers of obese mice. In addition, noradrenaline, the major neurotransmitter of the sympathetic nervous system, results in elevated Adam28 mRNA expression in human monocytes. Using siRNA technology, we also demonstrated that downregulation of ADAM28 resulted in a lack of weight gain, promotion of insulin sensitivity/glucose tolerance, and decreased liver TNF-α levels in our diet-induced obesity mouse model, as well as improved kidney function and reduced liver injury. Our study also highlighted the metabolic benefits in ADAM28 knock-out mice. An increasing number of studies suggest that ADAM28 plays a crucial role in the pathogenesis of several diseases [18][19][20][21][22][23][24][25], particularly in cancers such as breast cancer [19], prostate cancer [21], B-cell acute lymphoblastic leukaemia [25], chronic lymphatic leukaemia [23], and head and neck squamous cell carcinoma [24], and in other conditions such as lethal acute respiratory infections [18]. However, the role of ADAM28 in colorectal cancer remains controversial [20,22]. Here, we provide evidence in our current functional study that ADAM28 appears to also play a pathogenic role in the metabolic syndrome. It is known that ADAM28 is expressed in immune cells in mice and humans, primarily in the B-lymphocyte lineage [26,27]. It would be interesting to conduct future bone marrow transplantation studies using ADAM28 KO bone marrow to ascertain the role that ADAM28 expression in cells of the hematopoietic lineage has in the metabolic syndrome and the aforementioned diseases. Indeed, there is increasing evidence that obesity and T2D are associated with a chronic inflammatory state, and the important role of metalloproteinases in this inflammatory paradigm is being increasingly recognized [9,28,29]. We have previously documented several mechanisms by which the metalloproteinase and disintegrin domains of ADAM28 may promote inflammation and ultimately metabolic dysfunction [9]. Our group has highlighted that major substrates of the metalloproteinase domain of ADAM28 include IGFBP-3 and TNF-α, which may confer adipogenesis and inflammation, respectively [9,30]. Hence, based on our current study, it is plausible that therapeutic ADAM28 inhibition may reduce adipogenesis and inflammation due to diminished IGFBP-3 cleavage and TNF-α shedding.
We did indeed demonstrate in our in vivo studies in HFD-fed mice that silencing ADAM28 expression resulted in a marked reduction in TNF-α protein in the liver. This TNF-α protein may be cleaved or cytoplasmic-derived. Our previous work [9] and that from other groups [20] have reported that ADAM28 is elevated in overweight and obese humans and correlates with several parameters of the metabolic syndrome. The results of our present in vivo mouse study indicated that siRNA-mediated downregulation of ADAM28 promoted decreased high fat diet-induced weight gain, increased glucose tolerance/insulin sensitivity, decreased liver TNF-α levels, improved kidney function, and reduced liver injury. We also illustrate in ADAM28 knock-out mice that body weight is decreased and levels of protective high density lipoprotein cholesterol are significantly elevated, whilst kidney function is better preserved and liver injury is reduced. Therefore, our current data further support ADAM28's pathogenic role in the metabolic syndrome. Future studies should address the effect of ADAM28 expression on leptin levels, as leptin plays numerous beneficial metabolic roles such as appetite control [31,32]. Reducing ADAM28 levels with siRNA technology resulted in no gains in body weight, increased glucose tolerance/insulin sensitivity, decreased liver TNF-α levels, improved kidney function, and reduced liver injury. The siRNA approach would reduce all domains of the ADAM28 protein, including the disintegrin domain, which may be critically involved in the pathogenic role of ADAM28. We have previously discussed how binding of the disintegrin domain of ADAM28 to integrin α4β1 and/or P-selectin glycoprotein ligand-1 on leukocytes may promote inflammation [9]. Our novel findings show that increased ADAM28 mRNA and protein expression in high fat diet-induced obesity is associated with promoting features of the metabolic syndrome in mice. Additionally, ablation of ADAM28 in mice on normal chow confers metabolic benefits. These results provide evidence that downregulation of ADAM28 could be a potential therapeutic target for anti-obesity agents.

Materials and Methods

Animals
All experimental and animal handling activities were performed in accordance with the guidelines of the Institutional Animal Care and Use Committee of the Royal Perth Hospital, Western Australia. Animal ethics approval (#R522/13-16) for our experiments was received from the Royal Perth Hospital Animal Ethics Committee. Eight-week-old male specific pathogen-free C57BL6/J mice were obtained from the Animal Resources Centre (ARC, Perth, WA, Australia). Mice were acclimatized for 7 days, housed under a 12-h light/dark cycle, and given a standard diet with free access to food and water. In our first experiments, mice were administered a normal chow diet (14.3 MJ/kg; 76% of energy from carbohydrate, 5% from fat, 19% from protein; Specialty Feeds, Glen Forrest, WA, Australia) or a high fat diet, HFD (19 MJ/kg; 35% of energy from carbohydrate, 42% from fat, 23% from protein; Specialty Feeds, Glen Forrest, WA, Australia) for 12 weeks, and body weights were recorded weekly. At the end of the experiment, mice were sacrificed, and livers were collected for paraffin embedding and snap frozen in liquid nitrogen for mRNA studies.
ADAM28 siSTABLE siRNA Treatment
Eight-week-old male specific pathogen-free C57BL6/J mice were placed on different diet/siRNA treatment regimens: (1) standard chow, administered non-targeted siSTABLE siRNA (n = 3); (2) standard chow, administered siSTABLE mouse ADAM28 siRNA (n = 3); (3) high fat diet, administered non-targeted siSTABLE siRNA (n = 3); and (4) high fat diet, administered siSTABLE mouse ADAM28 siRNA (n = 3). The siRNA administration commenced at the end of week ten of the dietary regimen, as ADAM28 mRNA is increased at this time point in tissues such as the liver. Mice received siRNA injections every five days via the tail vein for the final two weeks of feeding. For each time point, 20 µg of siSTABLE siRNA (Dharmacon, Lafayette, CO, USA) was mixed with 200 µL DOTAP liposomal transfection reagent (Roche, Indianapolis, IN, USA). Body weight measurements were collected weekly. Glucose tolerance tests were performed at the start of week 12, and insulin tolerance tests were performed at the end of week 12, as indicated previously [10]. Liver and adipose tissue were collected and snap frozen. Blood urea nitrogen, alkaline phosphatase, and aspartate aminotransferase were measured in serum by PathWest LMWA (Murdoch, WA, Australia).

Western Blotting
Liver and gonadal white adipose tissue were homogenised in cytosolic extraction buffer containing phosphatase and protease inhibitors. Lysates were cleared, and protein concentrations were determined by Bradford assay using protein assay solution (Bio-Rad, Hercules, CA, USA). Protein lysates were solubilized in Laemmli sample buffer and boiled for 10 min, resolved by SDS-polyacrylamide gel electrophoresis on 10% polyacrylamide gels, transferred by semi-dry transfer to polyvinylidene difluoride membranes, and then blocked with 5% milk powder. Membranes were incubated overnight at 4 °C with anti-mouse ADAM28 monoclonal antibody (sc-514228 [H4], Santa Cruz Biotechnology, Paso Robles, CA, USA), anti-α-tubulin (Santa Cruz Biotechnology; sc-5546), or anti-β-actin (Abcam, Cambridge, UK; ab6276) at the recommended dilutions. Membranes were washed three times in washing buffer and incubated for 60 min at room temperature with anti-rabbit or anti-mouse horseradish peroxidase (HRP)-conjugated secondary antibodies (GE, Issaquah, WA, USA), as appropriate. Membranes were then washed and briefly incubated in Amersham ECL Prime Western Blotting Detection Reagent (GE, Issaquah, WA, USA). The protein bands were detected using the Alpha Innotech MultiImage II FluorChem FC2 (Miami, FL, USA).

RNA Extraction and Real-Time PCR
RNA from human THP-1 monocytes and livers of mice (fed normal chow or HFD) was extracted using Trizol reagent (Invitrogen, Carlsbad, CA, USA), and cDNA synthesis was performed using the High Capacity RNA-to-cDNA kit (Thermofisher Scientific, Waltham, MA, USA). Real-time PCR to determine the mRNA abundance of human or mouse Adam28 and 18S rRNA (housekeeping gene) was performed on a Rotor-Gene real-time PCR machine (Qiagen, Germantown, MD, USA) using pre-developed TaqMan probe and primer sets for human ADAM28 (Hs00248020_m1), human 18S (Hs03928990_g1), mouse Adam28 (Mm00456637_m1), and eukaryotic 18S rRNA (4310893E) (Thermofisher Scientific, Waltham, MA, USA). Quantitation was conducted as previously described [40].

ADAM28 KO Mice
An Academic Delta One licence agreement was obtained from Deltagen (San Mateo, CA, USA) to access phenotypic data for female ADAM28 knock-out (KO) mice fed a normal chow diet (t137).
Permission to publish the data was obtained from Robert Driscoll.
TNF-α ELISA on Murine Liver Protein
Liver tissue was homogenised in cytosolic extraction buffer containing phosphatase and protease inhibitors. Protein levels were determined using a Bradford protein assay, and TNF-α was measured in lysates using a mouse TNF-α ELISA (ELISAkit.com, Caribbean Park, Scoresby, VIC, Australia).
Statistics
All in vitro and in vivo results are expressed as the mean ± standard error of the mean (SEM). Data were analysed for differences by Student's t-test for unpaired samples where appropriate. Data were considered statistically significant when p < 0.05. T values were also calculated to further verify significance (Table S1).
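To make the summary statistics and significance testing concrete, the following is a minimal sketch (not the authors' code) of computing the mean ± SEM and an unpaired Student's t-test with SciPy; the group values are hypothetical placeholders rather than data from this study.

```python
# Minimal sketch of the summary statistics and unpaired Student's t-test
# described above. The TNF-alpha values below are hypothetical examples.
import numpy as np
from scipy import stats

def summarise(values):
    """Return the mean and the standard error of the mean (SEM)."""
    arr = np.asarray(values, dtype=float)
    return arr.mean(), arr.std(ddof=1) / np.sqrt(arr.size)

control_sirna = [12.1, 10.8, 13.4]   # hypothetical liver TNF-alpha, pg/mg
adam28_sirna = [6.2, 7.5, 5.9]       # hypothetical liver TNF-alpha, pg/mg

for name, group in [("non-targeted siRNA", control_sirna),
                    ("ADAM28 siRNA", adam28_sirna)]:
    mean, sem = summarise(group)
    print(f"{name}: {mean:.2f} +/- {sem:.2f} (mean +/- SEM)")

# Unpaired two-sample t-test; p < 0.05 is taken as statistically significant.
t_value, p_value = stats.ttest_ind(control_sirna, adam28_sirna)
print(f"t = {t_value:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```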
Conflicts of Interest: The authors declare no conflict of interest.
Systematic Review Study: A Comparative Analysis of the State of the Art of Green Criminology
Understanding the complex phenomena related to environmental damage requires multidisciplinary analyses capable of producing different scientific perspectives. Based on the problems arising from issues involving environmental damage to natural resources, Flores, Konrad and Flores (2017) evaluated the studies about green criminology globally, using articles retrieved from indexed publications available in digital databases. The present study aimed to survey the contributions added since that previous survey, to contribute to the understanding of what constitutes the essence of green criminology. To meet this purpose, we analyzed scientific articles published after the period investigated by Flores, Konrad and Flores (2017), which also allowed comparison with the existing data. We employed a qualitative approach, and the methodological procedure was a systematic bibliographic review. We concluded that in the last three years there has been an advance in studies about green criminology. Still, a limited number of studies has been published in this area. Research on the theory may be limited by political and geographical issues that inhibit the plurality of spatial and local perceptions regarding the theme. These issues also restrict preventive actions against environmental damage due to lack of knowledge and research, as the theory acts directly on preventive measures and the protection of nature.
Introduction
The global environmental conjuncture increasingly demonstrates the yearning for environmental protection. This scenario is reflected in international agreements, treaties and conventions that establish rules among the signatory countries, aiming at environmental conservation in the process of sustainable development. Therefore, it is necessary to have a harmonious integration between socio-spatial aspects and natural characteristics, thus realizing environmental justice (Ioris, 2009; Jacobi & Giatti, 2015; Pellizzaro, L. Hardt, C. Hardt, M. Hardt & Sehli, 2015). Given this scenario, it is understood that the issues regarding environmental degradation require multidisciplinary analyses capable of giving different scientific perspectives to understand the complex phenomena related to environmental damage. Criminology emerges in the environmental sciences, assuming that environmental damage is an area of criminological research. This leads to the contemporary phenomenon called "green criminology", a concept that has been increasingly used in reference to environmental crimes, damages, laws and justice (Fitzgerald & Baralt, 2010; Hinojosa, 2012; Milaré, 2013; Sanchez, 2013; South, Brisman & Mcclanahan, 2014; Lynch, 2017; Nobles, 2019). Given the growing knowledge on the issues involving environmental damage to natural resources, Flores, Konrad & Flores (2017) conducted a bibliographic and qualitative analysis of worldwide studies about green criminology. This survey was based on indexed publications available in digital collections. The authors inventoried the scientific output according to "theme, document authorship, institutional affiliation of researchers, spatial identification of the occurrence of discussions about green criminology in the global scenario, and chronological delimitation of publications" (Flores, Konrad & Flores, 2017, p. 270), using as a milestone the year 1990, when the theory received the mentioned nomenclature.
The aim of this research was to evaluate the state of the art of green criminology based on the analysis of studies published after the period investigated by Flores, Konrad & Flores (2017), and to compare the findings. The approach was qualitative, and the methodological procedure adopted was a bibliographic systematic review (Sampaio & Mancini, 2007).
Green Criminology: The Phenomenological Understanding
Green criminology, introduced under this nomenclature more than two decades ago by Lynch (1990), is a growing area of criminological expertise that focuses on environmental damage and its scope, distribution, control and consequences (for human beings, nonhuman species and the ecosystem). Originally, green criminology was established from the understanding of how economic and political relationships promote green crime and environmental damage by affecting legal definitions, social control, and the production, distribution and threats of toxic waste (Mcclanahan & Brisman, 2015; Lynch & Stretesky, 2016). The literature on green criminology has expanded to include theoretical, qualitative and quantitative studies dealing with the causes, consequences and control of damage and green crime. These studies have focused on food crime; genetic food modification; agricultural chemicals; crimes against animals; illegal trade and transnational environmental crimes; issues related to environmental justice; environmental crime, law and social control; and even specific issues like global warming (Lynch & Stretesky, 2016; Goyes & South, 2017). Green criminology expanded the scope of criminology, drawing attention to acts of "green violence" that have commonly been ignored in the traditional criminological literature. For inspiration, ecology-based criminology draws on observations from the scientific literature outside conventional criminology, and uses an empirical basis to identify damage. Thus, green criminologists examine environmental damages that are explicitly defined as illegal by criminal laws, as well as damage that is technically legal but certainly harmful. Green criminology is multidisciplinary and covers the environmental and political sciences, epidemiology, the medical literature, geography and sociology, among others. In view of this, green criminology has been described as a perspective rather than a theory, so there is no unified "theory" of this concept (Lynch & Stretesky, 2014; Lynch & Stretesky, 2016; Goyes & South, 2017). The term green criminology is not easily categorized (Lynch & Stretesky, 2014), as it brings together several subjects, as well as broad theoretical and ideological conceptions. Thus, green criminology is a generic expression given to the branch of criminology concerned with the general neglect of ecological issues within criminal science. It proposes the incorporation of green perspectives into conventional criminology. The authors point out that they are "disturbed by the fact that, as a discipline, criminology is unable to grasp the wisdom of taking green damage more seriously" (Lynch & Stretesky, 2014, p. 2). This criminological typology reveals some peculiarities regarding its denomination, as there is no unanimity regarding its nomenclature (South, 2014). Mostly, the term "green criminology" is used to refer to the study of crime, injustice and environmental damage. The term admits a plural conceptual understanding of the environment as natural, artificial, cultural and work-related.
Moreover, the author mentions that in the international scenario there are other terminologies used to address the concept, such as Conservation Criminology, Eco-crime, Eco-global Criminology and Environmental Criminology, as depicted in Figure 1. Green criminology, the main term used internationally, takes on interdisciplinary and multidisciplinary approaches to analyze broader environmental crimes and harms, which are often overlooked by mainstream criminology (Nurse, 2017). It therefore redefines criminology, in the sense that it no longer relates only to the crimes or social harms falling within the scope of criminal justice systems. It applies a "green" perspective to environmental crime and ecological justice, and includes the study of environmental laws and criminality, taking into account damage that affects the environment and non-human nature (Nurse, 2016; Nurse, 2017). Green criminology analyzes behaviors harmful to nature, theoretically and empirically, distinguishing between actions of primary impact (i.e., those that cause direct degradation of natural resources) and secondary ones (at the mediation level: the consequences of environmental damage, such as illegal food, medicine and drinking water markets). Also, environmental issues from the perspective of green criminology are classified according to a damage typology (Brisman & Mcclanahan, 2014; South & White, 2014). This typology consists of a color representation, called by the authors "Coloring Environmental Issues", represented by brown, green and white (Figure 2). Brown depicts the means of pollution, green refers to environmental concerns, and white presents the impacts of technologies (South & White, 2014, pp. 17-18). In the scenario of the specificities and classification of environmental damage, the objective is a thorough understanding of crime, in order to achieve environmental preservation. The analysis of green crimes enables proper law enforcement, provides integrated and grounded management of criminological environmental issues, and encourages a precautionary approach to the protection of natural resources (South & White, 2014; Nurse, 2016; Nurse, 2017). Green crime is a fast-moving, somewhat contested area, where academics, policy makers and practitioners often disagree on how green crimes should be defined, the criminal nature involved, potential solutions to green crime problems, and the content and priorities of the policy that should be adopted (Nurse, 2016). Within the ecological justice discourse, for example, there may be consensus that harm to animals and the environment should be addressed. However, whether green crimes are best addressed through criminal justice systems or through civil or administrative mechanisms is still debatable (Nurse, 2016). There is a central discussion within green criminology as to whether green crimes should be viewed as the focus of mainstream criminal justice and treated by central criminal justice bodies such as the police, or whether they should be considered outside the mainstream (Nurse, 2017). For traditional criminology, restrictive notions of police, policing by state and crime institutions are seen only through the predominance that criminal law determines (Lynch, Long, Barrett, & Stretesky, 2013; Lynch & Stretesky, 2014). However, even though environmental damage is a major threat to human survival, green crime is often overlooked by the main justice systems.
Consequently, ecological criminology extends beyond the focus on street and interpersonal crime to encompass consideration of the "destructive effects of human activities on local and global ecosystems" (Lynch & Stretesky, 2014, p. 1). Ecology-based criminology considers not only issues of crime defined by a strict idea of criminal law, but also examines issues relating to rights, justice, morals, victimization, criminality and the use of administrative, civil and regulatory justice systems. Green criminology also investigates the actions of non-state criminal justice actors such as Non-Governmental Organizations (NGOs) and civil society organizations, and the role of the state as a major contributor to environmental damage (Lynch & Stretesky, 2014; Nurse, 2017). State crime is a concern of green criminology, particularly regarding the responsibility to protect the environment and natural resources, and the resultant damage when states fail to fulfill these obligations. For example, environmental resources such as water and fisheries are held in trust for the public, and therefore there is a broad responsibility to use these resources in the public interest. Water pollution provides an example of green victimization and of how green crimes occur routinely in societies. For instance, several studies examined how states and corporations have turned water sources into commodities, something that can be owned or leased and subsequently exploited (Lynch, Long, Barrett, & Stretesky, 2013; Lynch & Stretesky, 2014; Johnson, South & Walters, 2016; Stretesky & Long, 2017; Nurse, 2017). Based on this idea, Johnson, South & Walters (2016) identified how the privatization of water in some jurisdictions allowed corrupt companies and states to exploit a fundamental human right. At a basic level, examining the extent and control of water pollution by statutory state treatment facilities demonstrates how state failures in the use of water resources can constitute a state crime. The example above illustrates well the concern of various green criminologists regarding how neoliberal markets, capitalist systems, and the activities of other corporate legal actors can cause significant environmental damage, thus sometimes constituting environmental crimes (Ozymy & Jarrell, 2017). A proof of this is the low level of prosecution for polluting activities, which reveals the diffuse structure of the US environmental regulatory regime and the lack of government databases, making the empirical assessment of environmental crimes and law enforcement efforts particularly difficult (Ozymy & Jarrell, 2017). From this perspective, Lynch (2017) argues that treating pollution broadly, from the concept of "within the law", is a matter of conjecture. To find out what can be considered correct, one should look at emission data, identify pollution patterns, make scientific references and collect evidence of local emissions. This issue is a longstanding concern for radical political economists. Ecological Marxists detail the contradiction between capitalism and ecology and note that the solution to this contradiction is a transition beyond capitalism, since its vision is based on continuous expansion, i.e., growth upon environmental destruction. Countless critics who defend continuous growth and free-market capitalism argue that technology is the alternative solution. But it must be acknowledged that technology has yet to make good on these claims.
This is evident considering the increasing measures of ecosystem destruction since the 1970s, and considering that carbon dioxide emissions have increased by over 80% since then (Lynch, 2017). From the capitalist point of view, technology promotes economic growth, thereby undermining the intent of technology designed to restrain environmental damage, since the impact increases over time as production continues to expand. At the micro level, the damage from any individual production may decrease, while at the macro level, ecological damage continues to expand as capitalism grows. While technological solutions sound attractive (because they lead to growth), as individuals can stop worrying about consumption and impacts on the ecosystem, they cannot address specific dimensions of the environment (Lynch, 2017). The problems stated above are governed by the known rules of science, and no matter how hard they try, humans cannot reverse those rules. The solution is to consume less and establish a new view on the society-nature relationship that accepts zero growth as beneficial to a socio-environmental proposal. Besides this solution, limitations to growth and consumption could be proposed (Lynch, 2017). It must be recognized that the rich produce far more environmental damage through overconsumption. Thus, governmental limitations on consumption and income, and the progressive taxation seen in some nations, could be a way of controlling harmful effects on nature. Many solutions have been proposed in a broad literature. The point is, however, that the proponents of capitalism dismiss these claims, undermine them and ensure that growth continues unaffected. Therefore, even though solutions exist, the economically and politically "powerful" do not want to see them implemented (Lynch, 2017). Another concern of green criminology is wildlife crime, especially wildlife trafficking and the illegal wildlife trade, including species threatened with extinction, according to Nurse (2015) and Essen et al. (2016). The illegal killing of wild animals, particularly in agricultural and livestock areas, has recently caught the eye of green criminology scholars. The killing of large predators such as wolves and lynx has been characterized as a form of resistance, illustrating the conflict between conservation and animal protection ideologies and the needs of traditional communities. While most states have laws to protect wildlife from human predation, hunting often remains a legal and regulated activity, and in many situations it is approved by the community and thus constitutes a kind of organized crime. The way the state deals with this issue will determine how states implement sanctions and show concern about species justice (Nurse, 2015; Essen et al., 2016). Green criminology also analyzes mechanisms to stop and prevent environmental crimes and to reduce damage to animals and the environment. In cases of environmental damage, traditional models of patrolling, seizure and punishment are likely to be inadequate, because irreparable environmental impacts or loss of animal life may already have occurred. Similarly, traditional justice systems are also often inappropriate to mitigate the damage to nature (Hall, 2017). In this sense, Hall (2017) advocates the use of approaches based on restorative justice and mediation, as the author believes that these provide alternative mechanisms for human and non-human victims of environmental crimes.
These alternatives form an integral part of green criminology's critical approach to preventive enforcement, which is to prevent damage. As highlighted by Nurse (2017), green criminology stands as a discipline that considers criminal issues not only as defined by a strictly legalistic conception of criminal law, but also considers issues related to rights, justice, morals, victimization, criminality and the use of criminal, administrative, civil and regulatory justice systems. Therefore, it is understood that the constitution of an alternative criminology, as proposed by South (2010) and focused on the mitigation of environmental damage and injustice, requires a new academic perspective, as well as a new global policy (Nurse, 2016; Nurse, 2017; Hall, 2017; Nobles, 2019).
Regarding the procedure for searching and selecting publications, the study used the advanced search by subject with the string "green criminology". This search was restricted to the title and subject of publications. The work proceeded with the reading of abstracts, keywords and the complete content of the publications, when necessary, to construct the theoretical framework exposed in this article, thus constituting a systematic bibliographic analysis of publications (Sampaio & Mancini, 2007; Wemrell, 2019). In this context, the systematic review uses as a data source the publications related to a certain theme (Wemrell, 2019), thus providing a summary "of the evidence related to a specific intervention strategy, through the application of explicit and systematic search methods, critical thinking and synthesis of selected information" (Sampaio & Mancini, 2007, p. 84).
Results and Discussion
The main results of the bibliographic survey and systematic analysis of the publications available in virtual collections and belonging to the green criminology theme are presented below.
Sample Publications by Database and Chronology
According to Graph 1, the string "green criminology" appears in 18 databases included in the collection of the CAPES Portal over the course of 3 years (2016-2019). Two databases, Springer Science and SAGE Publications, were the source of more than 20% of the total number of documents studied (30 documents). According to the data presented by Flores, Konrad & Flores (2017), shown in Table 1, the string "green criminology" was found in 13 databases from the CAPES Portal. However, their study covered 26 years (1990 until early 2016) and retrieved 69 documents. By comparing the data, an increase of five databases was observed, which demonstrates a greater diffusion of studies. It is also understood that the publications were made available on multiple international databases, which was also evidenced by Flores, Konrad & Flores (2017). With the increasing diffusion of publications, there is also "a greater accessibility for researchers, students and professionals in the field". However, given that most digital collections are paid services, the number of users is still limited (Flores, Konrad & Flores, 2017, p. 274). Therefore, in this regard, it is still concluded that the dissemination of the green criminology theory depends on its availability in a larger number of virtual collections (Flores, Konrad & Flores, 2017). Although a large number of available collections was found, many of them are still linked to payment systems, which may hinder access by researchers of different nationalities.
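As an illustration of how such database tallies can be produced, the sketch below (ours, not the original authors') counts systematic-review records per database and per year; the field names and sample records are hypothetical.

```python
# Illustrative tally of systematic-review records by database and year,
# mirroring the counts reported above. The records here are hypothetical.
from collections import Counter

records = [
    {"database": "Springer Science", "year": 2017},
    {"database": "SAGE Publications", "year": 2018},
    {"database": "Springer Science", "year": 2019},
]

by_database = Counter(r["database"] for r in records)
by_year = Counter(r["year"] for r in records)

total = len(records)
for database, count in by_database.most_common():
    print(f"{database}: {count} documents ({100 * count / total:.1f}%)")
print("Publications per year:", dict(sorted(by_year.items())))
```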
Regarding chronology, 30 publications referring to "green criminology" were found in the time period established for this study (January 2016 to June 2019). In comparison, Flores, Konrad & Flores (2017) found 69 publications over a 26-year period. This result shows a great increase in publications related to the theme over recent years. The "temporal dynamics of the occurrence of publications remained stagnant until 2007, experiencing gaps in 1999, 2000, 2002, 2005 and 2006" (Flores, Konrad & Flores, 2017). The authors also pointed out that since 2008 the theme has been evolving, although not linearly, since "it increased, especially in 2009 and 2013, the latter representing the most productive period in terms of number of publications" (Flores, Konrad & Flores, 2017, p. 278). Despite that rise, the number of publications declined in 2010, 2011 and 2012; "however, the progressive growth of publication number and associated references indicates the development of communications in the field of green criminology over the past eight years" (Flores, Konrad & Flores, 2017, p. 278), as shown in Graph 2.
Graph 2. Annual chronology of publications according to Flores, Konrad & Flores (2017)
It should be noted that the theme gained greater visibility in international discussions from 2016 to 2019, paving the way for a sequence of publications related to green criminology. It was also observed that the subject expanded within the area of green criminology. Among the new terminologies and concepts presented in the publications, some terms drew attention, such as: green militarization; ecofeminism; critical environmental criminology; green cultural criminology; green violence; and feminist criminology.
Scientific Publications by Authorship and Typology
Regarding authorship, its identification made it possible to verify the frequency of publications with individual authorship or co-authorship, and the researchers who stood out. Our research sample had 28 authors, of whom 18 had a single publication, four had two publications, and five had more than two. Although the results of our research identified 28 authors, 32.14% of the documents analyzed were authored by the same group of researchers (Graph 3). These data demonstrate that there was no homogeneous distribution of publications per author. This finding is similar to those presented by Flores, Konrad & Flores (2017, p. 276), since in their research 44 authors were found but 30% of the publications belonged to a single group of researchers.
Graph 3. Authors with multiple publications: quantity and percentage in relation to the sample. Source: Authors.
Authors who have the highest percentage of publications also publish in co-authorship, which enriches their studies with the diversity of ideas brought by the plurality of authors. According to Flores, Konrad & Flores (2017, p. 276), studies with multiple authors indicate a tendency to establish links with other researchers, aiming at developing studies in the areas of practice, "allowing the sharing of information as well as the improvement of the study through different perceptions of the object of analysis". The results for the typology variable are presented in Graph 4. Four types of publications were found, classified according to the CAPES Portal: article, book, book chapter and dissertation (Graph 4).
This differs from the typologies found by Flores, Konrad & Flores (2017), who found eight types: article, book review, book chapter, book, reference entry, thesis, textual feature and report (Graph 5).
Graph 4. Scientific publications by type. Source: Authors.
In both surveys, the highest percentage of scientific publications related to green criminology were articles, representing 90% of this sample and 46% of the documents presented by Flores, Konrad & Flores. Moreover, "Assuming that indexing is the central pillar that guides the degree of credibility of scientific research, it is predicted that the articles under analysis went through strict acceptance criteria (peer review), giving quality, reliability and originality to the study" (Flores, Konrad & Flores, 2017, p. 275). In addition, it was observed that another evaluative criterion of journal selection was the impact factor, present in most citations of scientific articles, which indicates the representativeness of publications in their areas. According to Flores, Konrad & Flores (2017, p. 275), the theory of green criminology, although emergent, "is published in journals with a high degree of scientific reliability and direct adherence in the area of criminology", which denotes the use of "ethical principles, significantly impacting the investigation of the theme".
Institutional Affiliation and Spatial Distribution
Of the 28 authors, 25 are affiliated with universities, two with institutes and one with an education center. These data are similar to those portrayed by Flores, Konrad & Flores (2017). Another similarity refers to the fact that none of those authors has direct or indirect ties to the public administration. This fact was noted by Flores, Konrad & Flores (2017) as a concern for the implementation and adoption of public policies aimed at the prevention of environmental damage. In terms of spatial distribution, the studies were authored by researchers from seven countries. The highest concentration was in the United States, making up 48% of the total (Graph 6), followed by Australia and Brazil (15%), England and Austria (7%), and Norway and Spain (4%).
Graph 6. Occurrence of publications by country. Source: Authors.
Similar to the survey by Flores, Konrad & Flores (2017), the United States is the country with the most research in the area. However, the data came as a surprise when compared with the previous sample, since studies appeared from three new countries: Austria, Spain and Brazil. This result draws attention to the fact that a country classified as semi-peripheral is producing studies on the theory, which was not observed in the previous survey (Arrighi, 1998; Ouriques & Viera, 2017). However, researchers with multiple scientific productions are concentrated in developed countries, specifically the United States, Australia, England and Austria. The absence of discussions of the emerging theory elsewhere denotes a connection with possible political and geographical issues (Flores, Konrad & Flores, 2017). There is solid evidence that research cooperation between institutions, regions or countries increases the visibility, quality and impact of the resulting publications. Bibliometric indicators of scientific production constitute a widely used methodology, especially among European researchers. According to Nassi-Calò (2015), this phenomenon has been attracting the attention of decision makers as a way to foster excellence in research in various parts of the world.
Conclusions
The information obtained through this research provided an overview of the profile of publications related to green criminology. From the systematic review and comparative study, it was possible to conclude that the literature on the subject has become wider in the last three years. Proportionally, the number of publications increased significantly in this period, taking into account that the interval studied by Flores, Konrad & Flores (2017) comprised 26 years. Only two documents were not in English; the others were published in full in English and made available in multiple databases, allowing greater accessibility. The sample by type revealed that the "scientific article" modality has the largest number of documents, published in journals indexed in various databases, which denotes a high degree of scientific reliability, positively impacting the investigation of the theme. In terms of authorship, the plurality of authors increased when compared to the previous study. However, the documents remain concentrated around a certain number of authors, which was also identified by Flores, Konrad and Flores. Co-authorship was very common, characterizing a profile of multiple perceptions of the theory under analysis. This characteristic persisted across the investigated periods, as already observed by Flores, Konrad & Flores (2017). As for the spatial distribution, unlike the previous survey, countries in the Americas considered peripheral or semi-peripheral (Brazil) appeared in our inventory. The core nations continue to lead in the percentage of research on green criminology and remain the focus of scientific production on the subject. It is concluded that in the last three years there has been an advance in studies about green criminology; however, there are still few studies published in this area. Research related to the subject may be limited by political and geographical issues, thus inhibiting the plurality of spatial and local perceptions regarding the concept. These limitations restrict the prevention of environmental damage due to lack of knowledge and research, since the theory acts directly on the precaution and protection of nature. They also harm semi-peripheral countries, which aspire to greater economic development and consequently need to exploit their natural resources; without research to guide this properly, they may commit environmental crimes or exceed ecological limits. In this sense, green criminology stands as a proactive tool for reflection and decision-making regarding crimes, damages, laws and environmental justice.
Ultrasound-assisted Extraction of Oil from Calophyllum inophyllum Seeds: Statistical Optimisation using Box-Behnken Design
Ultrasound-assisted extraction (UAE) of oil from Calophyllum inophyllum Linn seeds was studied, and the effects of four factors (extraction time, ultrasound power, extraction temperature, and liquid to solid (L/S) ratio) on the oil yield were optimised using a statistical tool. Specifically, the optimisation was carried out by employing a Box-Behnken statistical experimental design. The experimental data were fitted to a quadratic model using multiple regression analysis, giving a high determination coefficient value (R2) of 0.984. The predicted oil yield was optimum (56.2%) when the extraction was conducted for 21 min at 210 W ultrasound power, 42°C extraction temperature and 21 ml/g L/S ratio. Based on the model summary statistics, the experimental values agreed closely with the predicted values, indicating an excellent fit of the model used. The results indicated that Response Surface Methodology (RSM) was effective for optimising the UAE conditions of oil from C. inophyllum seeds.
INTRODUCTION
Calophyllum inophyllum Linn is an oilseed ornamental evergreen tree, commonly known as Penaga Laut in Malaysia. It belongs to the Clusiaceae family, having an average height of 8-20 m with a broad spreading crown of irregular branches.1-3 A number of medicinal and therapeutic properties have been ascribed to various parts of this multipurpose Calophyllum tree, including the treatment of rheumatism, varicose veins, hemorrhoids and chronic ulcers.4 The seeds of C. inophyllum have a very high oil content, consisting mostly of unsaturated oleic and linoleic acids.5 The oil not only possesses anti-inflammatory, antimicrobial and antiaging properties, but can also be used to treat diabetic sores, anal fissures, sunburn, dry skin, blisters and sore throat.4,6 The pain-relieving properties of C. inophyllum oil have been used traditionally against neuralgia, rheumatism and sciatica.6
Traditional techniques of extraction, such as heating, boiling or refluxing, used for the solvent extraction of natural products are associated with longer extraction times, lower yields, the use of large amounts of organic solvents, and poor extraction efficiency.7 Thus, developing an optimised novel extraction technology is necessary for the pharmaceutical, food and cosmetic industries. Recently, ultrasound-assisted extraction (UAE) has been developed as a novel technique to extract oil from plants.8-13 This technique has been considered a desirable method of extraction offering many advantages, including less extraction time, low extraction temperature, and high extraction efficiency.14 In addition, it is inexpensive, environmentally friendly and simple to operate.15,16 The mechanism of ultrasonic enhancement is mainly attributed to the behaviour of cavitation bubbles upon propagation of acoustic waves. The collapse of these bubbles can produce chemical, physical and mechanical effects which result in disruption of the material matrix, facilitating the release of extractable compounds and enhancing mass transfer of the solvent into the sample, thus increasing the release of target compounds from the matrix into the solvent.17
Response surface methodology (RSM) is a collection of mathematical and statistical techniques for designing experiments, building models, evaluating the effects of several factors, and obtaining the optimum conditions of factors for desirable responses. RSM provides the relationship between one or more measured dependent responses and a number of input factors.18 The optimisation process involves studying the response of the statistically designed combinations, estimating the coefficients by fitting them to a mathematical model that best fits the experimental conditions, predicting the response of the fitted model, and checking the adequacy of the model.19
Box-Behnken design (BBD) is one of the most common RSM tools, which has been widely used by researchers for the optimisation of experimental trials. It is more efficient to conduct experiments using a BBD than traditional methods because it simplifies the complexity of the experimental trials needed to evaluate multiple variables and their interactions.20 BBD is not only capable of determining accurate optimum values of the experimental parameters but also provides the possibility to evaluate the interaction between variables with a reduced number of experiments.21 BBD does not contain combinations for which all factors are simultaneously at their highest or lowest levels. It is therefore useful in avoiding experiments performed under extreme conditions, for which unsatisfactory results are often obtained.22 Its other advantages include the following: a smaller number of experiments; suitability for multiple variables, revealing possible interactions between them; and finding the most suitable correlation and forecasting the response.18
Even though the UAE technique offers many advantages, the feasibility of using UAE for the extraction of C. inophyllum seed oil has not yet been explored in the literature. Furthermore, to our best knowledge, reports on the optimisation of extraction conditions for C. inophyllum oil using RSM are very limited. Hence, the objective of our study is to investigate and optimise the effect of UAE process variables such as extraction time, ultrasonic power, extraction temperature and liquid to solid (L/S) ratio on the yield of C. inophyllum oil using a Box-Behnken response surface design. The optimised controlled conditions determined in this study should offer important reference values for any subsequent studies.
Material
C. inophyllum fruits were collected from Taman Kerian, Parit Buntar, Perak, Malaysia. The species was identified by Dr. Rahmad Zakaria (USM Herbarium 11565) from the School of Biological Sciences, Universiti Sains Malaysia. Prior to extraction, the fruits were slightly crushed to obtain the seeds. Then, the cleaned seeds were ground in a laboratory mill and sieved using a 10-mesh (pore size 2 mm) sieve. Analytical grade n-hexane (Merck) was used as the extraction solvent.
Moisture Content of the Seeds
The moisture content of the seeds was determined by the oven drying method at 105 ± 1°C for 24 h.23 The moisture content (wet basis) was calculated as:
MC (%) = [(m_i − m_d) / m_i] × 100 (1)
where m_i and m_d are the initial and final mass of the seed (g), respectively.24
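As a worked illustration of Equation (1), the following is a minimal sketch (not from the paper); the masses used are hypothetical examples.

```python
# Wet-basis moisture content, Equation (1): MC = 100 * (m_i - m_d) / m_i.
def moisture_content_wet_basis(m_initial_g, m_dried_g):
    """Percent moisture on a wet basis from initial and oven-dried masses."""
    return 100.0 * (m_initial_g - m_dried_g) / m_initial_g

# Example: 5.00 g of seed drying to 4.53 g gives 9.40% moisture,
# comparable to the 9.45% reported later for the seeds used in this study.
print(f"{moisture_content_wet_basis(5.00, 4.53):.2f}%")
```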
Ultrasound-assisted Extraction
The extraction was performed using an ultrasonic water bath (Transsonic Digital S Model T 840DH), with internal tank dimensions of 327 mm × 300 mm × 200 mm and a volume of 18 l, having a power consumption of 1100 W and a fixed operating frequency of 40 kHz. It is equipped with an adjustable power output from 70 to 250 W. The ultrasonic bath was filled with water to approximately 2/3 of its volume. The seed sample (5 g) and the extracting solvent, n-hexane, were placed in an Erlenmeyer flask (250 ml) covered with aluminium foil. Then, the flask was immersed into the centre position of the ultrasonic bath, and this position was kept constant throughout the experiments. During extraction, the temperature was controlled and maintained at the desired level by water circulating from a water bath. After extraction, the liquid extract was separated from the seed residue by centrifugation at 4000 rpm for 20 min. The solvent was then removed using a rotary evaporator and the oil obtained was dried until a constant weight was reached. The extracted oil was collected in a pre-weighed 50 ml beaker for the yield calculation.
Determination of C. inophyllum Oil Yield
The extraction yield of C. inophyllum oil was calculated using Equation (2):
Y (%) = (M_1 / M_0) × 100 (2)
where Y is the extraction yield of C. inophyllum oil (%), M_1 is the mass of C. inophyllum oil extracted from the sample (g) and M_0 is the mass of the sample used (g).25 The mass of C. inophyllum oil extracted from the sample, M_1, was calculated as the difference between the mass of the beaker containing the oil and the mass of the empty beaker used.
Experimental Design
A four-factor, three-level BBD was employed to determine the optimal conditions for the UAE of C. inophyllum oil. In order to evaluate the effect of the process variables, 29 experiments including five replicates at the central point were performed randomly. The four independent variables involved in this study were extraction time (X_1), ultrasonic power (X_2), extraction temperature (X_3) and liquid to solid ratio (X_4), while the dependent variable was the yield of C. inophyllum oil (Y). The ranges of the independent variables were chosen based on the results of preliminary experiments. All independent variables and their respective levels used in the BBD are shown in Table 1.
Table 1: Independent variables and their respective coded levels employed in BBD. Liquid to solid ratio, X_4 (ml/g): 15, 20, 25.
Each of these independent variables was coded at three levels: -1, 0 and +1. The coding of the variables was done according to the following equation:26
x_i = (X_i − X_Z) / ΔX_i, i = 1, 2, 3, ... (3)
where x_i is the dimensionless (coded) value of an independent variable, X_i is the real value of an independent variable, X_Z is the real value of an independent variable at the centre point, and ΔX_i is the step change of the real value of the variable i corresponding to a variation of one unit in the dimensionless value of the variable i. The experimental data were analysed by multiple regression to fit the following quadratic polynomial model:27
Y = β_0 + Σ β_i X_i + Σ β_ii X_i^2 + ΣΣ β_ij X_i X_j (4)
where Y is the predicted response and β_0 is the intercept; β_i, β_ii and β_ij are the regression coefficients for the linear, quadratic and interactive terms, respectively; and X_i and X_j are the coded independent variables.
Statistical Method
The statistical analysis was carried out using Design Expert (Version 6.0.6, Stat-Ease Inc., Minneapolis, Minnesota, USA). Modeling of the data started with a quadratic model including linear, squared and interaction terms. The adequacy of the model was determined by evaluating the coefficient of determination (R^2), the lack of fit, the adequate precision and the F-test value obtained from the analysis of variance (ANOVA). The regression coefficients obtained from the model were then used in the statistical calculations to generate response surface plots.28 Additional confirmation experiments were subsequently conducted to verify the validity of the statistical model.
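To make the model-fitting step concrete, here is a minimal sketch, not the authors' Design-Expert workflow, of fitting the second-order polynomial of Equation (4) by ordinary least squares; for brevity it uses two coded factors and hypothetical yields, whereas the real analysis fitted four factors over the 29-run design of Table 2.

```python
# Minimal OLS fit of the quadratic model in Equation (4) using NumPy.
# Two coded factors and hypothetical yields are used for brevity.
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Columns: intercept, linear, squared and pairwise interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                # beta_i
    cols += [X[:, i] ** 2 for i in range(k)]                           # beta_ii
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]  # beta_ij
    return np.column_stack(cols)

# Coded levels (-1, 0, +1) for, say, extraction time (x1) and temperature (x2).
X = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)], dtype=float)
y = np.array([52.2, 53.4, 52.8, 54.0, 56.2, 54.6, 52.5, 53.9, 52.9])  # yield, %

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients (b0, b1, b2, b11, b22, b12):", np.round(beta, 3))
print("R^2 =", round(float(r_squared), 4))
```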
Statistical Analysis and Model Fitting
The C. inophyllum seeds used in this study had a moisture content of 9.45%. The extraction parameters involved in the UAE of C. inophyllum oil were optimised using the BBD. The experimental design matrices with their respective C. inophyllum oil yield responses are shown in Table 2. The resulting oil yield ranged between 52.08% and 56.28%, where the maximum oil yield was obtained under the following extraction conditions: 20 min extraction time, 210 W ultrasonic power, 40°C extraction temperature and 20 ml/g L/S ratio. By applying multiple regression analysis to the experimental data, the predicted response variable and the independent variables were found to correlate through a second-order polynomial equation expressed in terms of the coded factors X_1 (extraction time), X_2 (ultrasonic power, W), X_3 (extraction temperature) and X_4 (liquid to solid ratio, ml/g).
The coefficients with a single factor represent the effect of that particular factor on the C. inophyllum oil yield, while those with second-order terms and two different factors represent the quadratic and interactive effects, respectively. Analysis of variance (ANOVA) was used to test the adequacy and fitness of the model. Table 3 provides the regression coefficient values obtained from the statistical analysis. The regression model for C. inophyllum oil yield was considered highly significant owing to the values of both the F value and the p-value, which were 61.54 and < 0.0001, respectively. Meanwhile, the p-value for the lack of fit (0.9566) was higher than 0.05. This shows that the lack of fit was not significant relative to the pure error and indicates that the fitted model adequately describes the experimental data. These two values confirmed the goodness-of-fit and suitability of the regression model. The adequacy of the model was further tested by evaluating the determination coefficient (R^2). The determination coefficient (R^2 = 0.9840) indicates that only 1.6% of the total variation is not explained by the model. The value of the adjusted determination coefficient (R^2_adj = 0.9680) also confirmed that the model was highly significant. At the same time, a relatively low value of the coefficient of variation (C.V.% = 0.35) shows that the experimental values of the regression model were precise and reliable. Adequate precision is a measure of the range in predicted response relative to its associated error, and a value greater than four indicates that the model can be used within the region of operation.29 In this study, an adequate precision value of 27.66 indicated that the model has an adequate signal. In conclusion, the established model is adequate for prediction within the range of the experimental variables. The significance of each coefficient, measured using the F-value and p-value, is listed in Table 4. For each term in the model, a large F-value and a small p-value imply a more significant effect on the respective response variable.30 All regression coefficients were significant (p < 0.05) for the response variable except for two interactive coefficients: the interaction between extraction time and ultrasonic power (X_1X_2) and that between ultrasonic power and extraction temperature (X_2X_3).
Comparison of Experimental and Predicted C. inophyllum Oil Yield
A regression model provides the ability to predict future observations of the response (C. inophyllum oil yield) corresponding to particular values of the variables. However, verification of the model is essential to ensure an adequate approximation to the actual values. Proceeding without proper analysis and optimisation of the fitted response surface would probably lead to misleading results.28
Therefore, diagnostic plots such as the experimental versus predicted values shown in Figure 1 were used to judge the model adequacy and display the correlation between experimental and predicted values. Each experimental value is compared to the predicted value computed from the model. The data points on this plot are positioned close to the straight line, signifying that there is sufficient agreement between the actual data and the model data. This result implies that the regression model used in this extraction process was able to predict optimum operating conditions for C. inophyllum oil extraction.
Figure 1: Comparison between predicted and experimental oil yield.
Response Surface Optimisation of the C. inophyllum Oil Extraction Conditions
Three-dimensional response surface and two-dimensional contour plots generated by the Design Expert software version 6.0.6 were used to visualise the relationship between independent and dependent variables and the interactions between two variables. Different shapes of the contour plots indicate whether the mutual interactions between the independent variables are significant or not. Circular contour plots indicate negligible interactions between the corresponding variables, while elliptical contour plots indicate significant interactions between the corresponding variables.31 The three-dimensional representations of the response surfaces generated by the model are shown in Figures 2, 3, 4 and 5. Among the four variables studied, two variables were kept constant at their respective zero levels while the other two variables were varied within the experimental range in the three-dimensional surface plots.
Figure 2 illustrates the effects of extraction time and extraction temperature on C. inophyllum oil yield at an ultrasonic power of 210 W and an L/S ratio of 20 ml g-1. Increases in extraction time from 15 min to 20 min and extraction temperature from 35°C to 40°C gradually increased the oil yield, which then began to level off and decrease slightly at elevated temperatures (> 40°C) and longer extraction times (> 20 min). The initial sharp increase in the extraction yield was due to the large oil concentration gradient between the extracting solvent and the seeds, and also to the easier extraction of oil from the outermost part of the seeds. As the extraction time proceeded, the concentration gradient decreased; although mass transfer increased with continuous exposure to ultrasound, extraction from the interior part of the seeds became more difficult. The continuous release of the oil resulted in a saturated solvent, leading to negligible mass transfer and extraction.32 According to Jovanovic-Malinovska et al.,33 this observation can be well explained by Fick's second law of diffusion, which states that a final equilibrium between the solute concentrations in the solid matrix (seeds) and in the bulk solution (solvent) will be achieved after a certain time. Hence, an excessive extraction time did not lead to an enhanced oil yield. The final equilibrium between the oil concentration within the seeds and in the n-hexane was therefore achieved at approximately 20 min of extraction time. This result was in agreement with the findings reported by Zhang et al.34 on the UAE of epimedin A, B, C and icariin from Herba Epimedii. The increasing trend of the oil yield with increasing temperature (35°C to 40°C) is probably due to the improvement of mass transfer resulting from the increased solubility of C. inophyllum oil and the decreased viscosity of the solvent.35 On the other hand, the reverse trend could be explained by a combination of acoustic cavitation and thermal effects. Temperature has a positive effect on vapour pressure; therefore, high temperature led to an increase in the vapour pressure of solvent molecules within the cavitation micro-bubbles, causing damping of the bubble collapse and a decrease in cavitation intensity.36 Sun et al.37 reported the same trend in their research on all-trans-β-carotene extraction from citrus peels using ultrasound treatment.
Figure 3 shows the effects of extraction time and L/S ratio on C. inophyllum oil yield when the ultrasonic power and extraction temperature were maintained at 210 W and 40°C. The oil yield increased significantly over the lower range of extraction time (15 to 20 min) and L/S ratio (15 to 20 ml g-1). In contrast, when extraction time and L/S ratio were raised to a higher level, the oil yield did not show any remarkable improvement. A high ratio of liquid to solid material implies a greater concentration difference between the interior plant cells and the exterior solvent, so that the diffusion of oil occurs more quickly. In this case, increasing the L/S ratio from 15 to 20 ml g-1 created a larger concentration difference between the interior seeds and the exterior solvent, thus enhancing the oil yield. The oil yield did not improve further when the L/S ratio was increased from 20 to 25 ml g-1, due to the prolonged distance of diffusion towards the interior tissues.38 A study of the UAE of oleanolic and ursolic acids from pomegranate (Punica granatum L.) flowers found that a solvent to material ratio of 20 ml g-1 was the best condition for the extraction and that a larger ratio did not increase the extraction yield.39
Figure 4 illustrates the response surface plot for the effects of ultrasonic power and L/S ratio on C. inophyllum oil yield when the extraction time and extraction temperature were held constant at 20 min and 40°C, respectively. It can be seen that a higher oil yield was reached at an ultrasonic power between 190 W and 210 W and an L/S ratio between 15 ml g-1 and 20 ml g-1. An increase in ultrasound power promotes a more vigorous destruction of the seeds' cell walls. The higher the ultrasound power, the more solvent can enter the interior of the cells and the more oil is released into the solvent, hence improving the extraction efficiency.40 However, beyond 210 W and 20 ml g-1, the oil yield started to decrease. This observation can be explained by the increase of acoustic intensity with increasing ultrasonic power: more bubbles are formed, which hampers the propagation of the shock waves, and the bubbles may coalesce to form bigger ones that implode weakly. Therefore, the extraction efficiency decreases.41 Sun et al.,42 in their study on the UAE of five isoflavones from Iris tectorum Maxim, reported the same trend, where the highest extraction yield for all isoflavones was achieved at an ultrasound power of 150 W, and the extraction yield decreased when the power was above 150 W.
Figure 5 shows the effects of extraction temperature and L/S ratio on C. inophyllum oil yield, with the extraction time and ultrasonic power set constant at their respective centre values of 20 min and 210 W. The oil yield increased as the extraction temperature and L/S ratio increased in the ranges of 35°C to 40°C and 15 to 20 ml g-1, respectively. The highest oil yield was obtained at approximately 40°C with an L/S ratio of 20 ml g-1. Further increases in the temperature (40°C to 45°C) and L/S ratio (20 to 25 ml g-1), however, caused a slight decrease in the oil yield.
Validation of the Predictive Model
inophyllum oil yield.

Optimum conditions identified were as follows: extraction time 20.8 min, ultrasonic power 211.74 W, extraction temperature 41.54°C and L/S ratio 20.5 ml g^-1. This combination of extraction conditions was expected to give a maximum oil yield of 56.2%. For operational convenience, the optimum conditions were rounded to 21 min, 210 W, 42°C and 21 ml g^-1 for extraction time, ultrasonic power, extraction temperature and L/S ratio, respectively. However, validation of the predicted optimum conditions is required to establish the adequacy and reliability of the model equation. Therefore, five sets of confirmatory experiments were conducted at the suggested optimum extraction conditions. As tabulated in Table 6, the experimental oil yield was 56.03%, close to the predicted value (56.2%). Additionally, the percentage error between the experimental and predicted values was in the range of 0.30-3.75% (< 5%), indicating that the predicted conditions and response were verified for optimising the UAE of C. inophyllum oil. As a result, the model developed by BBD was suitable and could be used effectively to optimise the parameters of C. inophyllum oil extraction.

CONCLUSION

The Box-Behnken response surface design was successfully employed to optimise the UAE of oil from C. inophyllum seeds. Four independent variables, namely extraction time, ultrasound power, extraction temperature and L/S ratio, significantly affect the C. inophyllum oil yield. The developed model gave a high determination coefficient (R^2) of 0.984, implying a satisfactory fit to the experimental data. The optimum conditions were found to be as follows: extraction time 21 min, ultrasound power 210 W, extraction temperature 42°C and L/S ratio 21 ml g^-1. Under these optimised conditions, the maximum oil yield observed was 56.03%, in good agreement with the value predicted by the regression model.

Figure 2: Response surface plot showing the effects of extraction time and extraction temperature on the yield of C. inophyllum oil. The ultrasonic power and L/S ratio were fixed at 210 W and 20 ml g^-1, respectively.

Figure 2 illustrates the effects of extraction time and extraction temperature on C. inophyllum oil yield at an ultrasonic power of 210 W and an L/S ratio of 20 ml g^-1. Increasing the extraction time from 15 min to 20 min and the extraction temperature from 35°C to 40°C gradually increased the oil yield, which then levelled off and decreased slightly at elevated temperatures (> 40°C) and longer extraction times (> 20 min). The initial sharp increase in the extraction yield was due to the large oil concentration gradient between the extracting solvent and the seeds, and to the easier extraction of oil from the outermost part of the seeds. As the extraction proceeded, the concentration gradient decreased; although mass transfer was enhanced by continuous exposure to ultrasound, extraction became more difficult because the remaining oil had to diffuse from the interior of the seeds. The continued release of oil eventually saturated the solvent, leading to negligible mass transfer and extraction [32].

Figure 3: Response surface plot showing the effects of extraction time and L/S ratio on the yield of C. inophyllum oil. The ultrasonic power and extraction temperature were fixed at 210 W and 40°C, respectively.

Figure 3 shows the effects of extraction time and L/S ratio on C. inophyllum oil yield when the ultrasonic power and extraction temperature were maintained at 210 W and 40°C. The oil yield increased significantly over the lower ranges of extraction time (15 to 20 min) and L/S ratio (15 to 20 ml g^-1). In contrast, when the extraction time and L/S ratio were raised to higher levels, the oil yield did not show any remarkable improvement. A high ratio of liquid to solid material implies a greater concentration difference between the interior plant cells and the exterior solvent, so the diffusion of oil occurs more quickly. In this case, increasing the L/S ratio from 15 to 20 ml g^-1 created a larger concentration difference between the interior of the seeds and the exterior solvent, thus enhancing the oil yield.

Figure 4: Response surface plot showing the effects of ultrasonic power and L/S ratio on the yield of C. inophyllum oil. The extraction time and extraction temperature were fixed at 20 min and 40°C, respectively.

Figure 5: Response surface plot showing the effects of extraction temperature and L/S ratio on the yield of C. inophyllum oil. The extraction time and ultrasonic power were fixed at 20 min and 210 W, respectively.

Figure 5 shows the effects of extraction temperature and L/S ratio on C. inophyllum oil yield with the extraction time and ultrasonic power held constant at their respective centre values of 20 min and 210 W. The oil yield increased as the extraction temperature and L/S ratio increased over the ranges of 35°C to 40°C and 15 to 20 ml g^-1, respectively. The highest oil yield was obtained at approximately 40°C with an L/S ratio of 20 ml g^-1. Further increases in temperature (40°C to 45°C) and L/S ratio (20 to 25 ml g^-1), however, caused a slight decrease in the oil yield.

Table 2: Box-Behnken experimental design and results for extraction yield of C. inophyllum oil.

The resulting oil yield ranged between 52.08% and 56.28%, where the maximum oil yield was obtained under the following extraction conditions: 20 min extraction time, 210 W ultrasonic power, 40°C extraction temperature and 20 ml g^-1 L/S ratio. By applying multiple regression analysis to the experimental data, the predicted response variable and the independent variables were found to be correlated by a second-order polynomial equation. The equation was expressed in terms of the coded factors X_1 (extraction time), X_2 (ultrasonic power, W), X_3 (extraction temperature) and X_4 (liquid-to-solid ratio, ml g^-1); the estimated coefficients are listed in Table 5.

Table 4: ANOVA of the regression quadratic model for the prediction of the C. inophyllum oil yield.

Table 5: Estimated coefficients and significance test for linear, quadratic and interactive factors of the regression model. S = significant, N-S = non-significant.

Table 6: Experimental and predicted C. inophyllum oil yield under optimum conditions.

The efficiency of the UAE technique in extracting oil from C. inophyllum seeds was compared with the study reported by Jahirul et al. [43], who extracted C. inophyllum seed oil using a conventional solvent (hexane) extraction technique. The highest oil yield they obtained was approximately 51% after 8 h of extraction. In addition, they also used a mechanical extraction technique (screw press) to obtain the oil. However, this technique was less efficient, as it took over an hour to process a single sample and the oil yield was low (approximately 25%). In contrast, the UAE technique in the present study required only 21 min to give a maximum oil yield of 56%. This shows that the application of ultrasound has successfully reduced the extraction time needed, making UAE a more effective and promising technique than conventional extraction.
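To make the optimisation procedure concrete, the sketch below (not the authors' code; the design points and yields are synthetic placeholders standing in for the data of Table 2) fits a second-order polynomial response surface to coded Box-Behnken factors and locates the optimum inside the coded region:

```python
# Minimal sketch: second-order response-surface fit and optimisation.
# Design points and yields are synthetic placeholders, not the paper's data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def quad_features(X):
    """Intercept, linear, pure quadratic and two-factor interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# Coded factor levels (-1, 0, +1) for time, power, temperature, L/S ratio.
X = rng.choice([-1.0, 0.0, 1.0], size=(29, 4))
# Synthetic yield (%) with a known optimum near the centre of the region.
y = 56.0 - ((X - 0.1) ** 2).sum(axis=1) + rng.normal(0, 0.2, 29)

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

def fitted_yield(x):
    # Evaluate the fitted quadratic surface at a single coded point.
    return (quad_features(np.atleast_2d(x)) @ beta)[0]

# Maximise the fitted surface inside the coded region [-1, 1]^4.
res = minimize(lambda x: -fitted_yield(x), x0=np.zeros(4), bounds=[(-1, 1)] * 4)
print("coded optimum:", np.round(res.x, 2), "predicted yield:", -res.fun)
```

Converting the coded optimum back to physical units (min, W, °C, ml g^-1) then reproduces the kind of optimum-condition statement reported above.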
2018-12-30T02:30:31.084Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "908970d75b9a653e63e41c3c77fd19a0b3fe51f8", "oa_license": "CCBY", "oa_url": "http://jps.usm.my/wp-content/uploads/2016/08/JPS-272-8.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "908970d75b9a653e63e41c3c77fd19a0b3fe51f8", "s2fieldsofstudy": [ "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
1698
pes2o/s2orc
v3-fos-license
The Calán-Yale Deep Extragalactic Research (CYDER) Survey: Optical Properties and Deep Spectroscopy of Serendipitous X-ray Sources

We present the first results from the Calán-Yale Deep Extragalactic Research (CYDER) survey. The main goal of this survey is to study serendipitous X-ray sources detected by Chandra in an intermediate flux range ($10^{-15}-10^{-12}$ ergs cm$^{-2}$ s$^{-1}$) that comprises most of the X-ray background. 267 X-ray sources spread over 5 archived fields were detected. The $\log N-\log S$ distribution obtained for this sample is consistent with the results of other surveys. Deep $V$ and $I$ images were taken of these fields in order to calculate X-ray-to-optical flux ratios. Identifications and redshifts were obtained for 106 sources using optical spectroscopy from 8-m class telescopes to reach the optically faintest sources, to the same level as deeper X-ray fields like the Chandra Deep Fields, showing that the nature of the sources detected depends mostly on the optical limit for spectroscopy. In general, sources optically classified as obscured Active Galactic Nuclei (AGNs) have redder optical colors than unobscured AGN. A rough correlation between $f_X/f_{\rm opt}$ and hard X-ray luminosity was found for obscured AGN, confirming the prediction by existing models that in obscured AGN the optical light is completely dominated by the host galaxy. The previously claimed decrease of the obscured to unobscured AGN ratio with increasing X-ray luminosity is observed. However, this correlation can be explained as a selection effect caused by the lower optical flux of obscured AGN. Comparison between the observed $N_H$ distribution and predictions by existing models shows that the sample appears complete up to $N_H<3\times 10^{22}$ cm$^{-2}$, while for more obscured sources incompleteness plays an important role in the observed obscured to unobscured AGN ratio.

Introduction

Wide-area X-ray surveys have played a key role in understanding the nature of the sources that populate the X-ray universe. Early surveys like the Einstein Medium Sensitivity Survey (Gioia et al. 1990), the ROSAT (Roentgen Satellite) International X-ray/Optical Survey (Ciliegi et al. 1997) and the ASCA (Advanced Satellite for Cosmology and Astrophysics) Large Sky Survey (Akiyama et al. 2000) showed that the vast majority of the X-ray sources were AGN. In particular, in shallow wide-area surveys in the soft (0.5-2 keV) X-ray band, most of the sources detected are unobscured, broad-line AGN, which are characterized by a soft X-ray spectrum with a photon index Γ = 1.9 (Nandra & Pounds 1994). More recent, deeper observations, mostly by ROSAT (Hasinger et al. 1998), XMM-Newton and Chandra, which resolved between 70% and 90% of the X-ray background (XRB), showed that the vast majority of this background radiation can be attributed to AGN. However, the spectrum of the XRB is well characterized up to E ∼ 30 keV by a power law with photon index Γ = 1.4 (Gruber et al. 1999), harder than the typical unobscured AGN spectrum (Mushotzky et al. 2000). Given that photoelectric extinction preferentially absorbs soft X-ray photons (Morrison & McCammon 1983), the X-ray spectra of obscured AGN look harder and are therefore more compatible with the observed spectral shape of the XRB. Therefore, population synthesis models (Madau et al. 1994; Comastri et al. 1995; Gilli et al. 1999, 2001) that can explain the spectral shape and normalization of the XRB use a combination of obscured and unobscured AGN as the major contributor.
In these models, the ratio of obscured to unobscured AGN is about 4:1 (Gilli et al. 2001) with a redshift peak at z ∼ 1.3. However, recent deep optical spectroscopic follow-up in the Chandra Deep Fields (CDF) North and South revealed a much lower redshift peak, at z ∼ 0.8, and an obscured to unobscured AGN ratio of ∼2:1. While large observational efforts have been concentrated in the Chandra Deep Fields, which provide the deepest view of the X-ray Universe (e.g., a flux limit of ≃2.5 × 10^-17 ergs cm^-2 s^-1 in the CDF-N), the small area covered (≃0.07 deg^2 each) does not allow them to obtain a statistically significant number of sources in the intermediate X-ray flux range (10^-15 - 10^-12 ergs cm^-2 s^-1) that contributes ∼60-70% of the XRB. Therefore, we obtained identifications and studied the multiwavelength properties of X-ray sources in this flux range over a much larger area. Specifically, in 2001 we started the Calán-Yale Deep Extragalactic Research (CYDER) survey, a multiwavelength study of serendipitous X-ray sources in existing, archived, moderately deep Chandra fields. Initial results from the first two fields studied were presented by Castander et al. (2003b). Also, two high-redshift (z > 4) X-ray selected quasars discovered in this survey, a significant fraction of the total sample known today (∼10), were reported by Castander et al. (2003a) and Treister et al. (2004a). Near-infrared images in the J and K bands were obtained for these fields down to J ∼ 21 and K ∼ 20 mag (Vega). The results of combining X-ray/optical and near-infrared observations for our sample of serendipitous X-ray sources will be presented in a following paper (F. Castander et al., in prep.).

In this paper, we present optical photometry for 267 X-ray sources selected in the Chandra total band (0.5-8 keV) in the five fields studied by the CYDER survey. Also, spectroscopic identifications and redshifts for 106 X-ray sources are presented. The sample presented here is comparable in multiwavelength follow-up to deeper, more famous surveys like the CDFs and the Lockman Hole. Spectroscopic identifications were obtained for sources with relatively faint optical fluxes (V ∼ 24 mag), allowing for a more unbiased study of the X-ray population and showing that the statistical properties of the sample depend significantly on the depth of the spectroscopic follow-up. Also, the use of five different fields spread over the sky allows us to reduce the effects of cosmic variance, which affected the results of single-field studies, e.g., the presence of clusters in the CDF-S (Gilli et al. 2003). In § 2 we explain the criteria used to select the X-ray fields studied and the procedures followed to reduce the X-ray data and to extract source lists. In § 3 we describe the optical imaging and spectroscopy observations and the data reduction methods used. In § 4 we present source properties in each wavelength range. Our results are discussed in § 5 and the conclusions outlined in § 6. Throughout this paper we assume H_0 = 70 km s^-1 Mpc^-1, Ω_m = 0.3 and Ω_Λ = 0.7, consistent with the cosmological parameters reported by Spergel et al. (2003).

Field Selection

Fields in the CYDER survey were selected based on existing deep Chandra observations available in the public archive before the optical imaging campaign started in late 2000.
The fields are observable from the southern hemisphere and lie at high Galactic latitude (|b| > 40°) in order to minimize dust extinction; known clusters were avoided given the difficulty of dealing with a diffuse, non-uniform background. Only observations with the Advanced CCD Imaging Spectrometer (ACIS; Garmire et al. 2003) were used. In the case of ACIS-I observations all four CCDs were used, while for observations in the ACIS-S mode only the S3 and S4 chips were used, in order to keep the off-axis angle small and therefore only use zones with good sensitivity. The fields selected for this study are presented in Table 1. In two of these fields, C2 and D2, the original target of the observation was a galaxy group. In these cases, the diffuse emission of the galaxy group reduces the sensitivity in the center of the region, but not dramatically. Roughly the central 40″-radius region of HCG 62 has substantial gas emission, while the central 1′ region was affected by the presence of HCG 90. This accounts for about 2% of the HCG 62 region and a slightly smaller fraction of the HCG 90 region, since the latter was observed with ACIS-I. The effective area of field C5 was set to 0, since the Chandra images of that field were read in subraster mode to include only the central source; serendipitous sources detected in that field were therefore ignored when computing the log N - log S relation. Some of our fields were also studied by other similar surveys: the Q2345 and SBS 0335 fields were analyzed by the Chandra Multiwavelength Project (ChaMP; Kim et al. 2004), while the HCG 62 and Q2345 fields were studied by the Serendipitous Extragalactic X-ray Source Identification Program (SEXSI; Harrison et al. 2003).

X-ray Data Reduction

Reduction of the data included the removal of bad columns and pixels following the guidelines specified on the "ACIS Recipes: Clean the Data" web page, and the removal of flaring pixels using the FLAGFLARE routine. We used the full set of standard event grades (0, 2, 3, 4, 6) and created two images, one from 0.5 to 2.0 keV and one from 2.0 to 8.0 keV. We then used the WAVDETECT routine from the CIAO package to identify the point sources within these images, checking wavelet scales 1, 2, 4, 8 and 16. Sources were extracted independently in the soft (0.5-2.0 keV) and hard (2.0-8.0 keV) band images. The false source detection probability was set to 10^-6 for ACIS-S observations and 10^-7 for ACIS-I observations, which gives a likelihood of ∼1 false source detection per field observed. Given the low density of X-ray sources and the good spatial resolution of Chandra, matching sources in the soft and hard bands was straightforward. Where the X-ray spectrum had at least 60 counts, the photons were binned in groups of 20 and the spectrum was fit in XSPEC 11.0, using a model consisting of a power law with the appropriate Galactic absorption value for each field. Where the number of counts was smaller than 60, the same procedure was used, except that the spectral index was fixed to Γ = 1.7, consistent with the hardening of the X-ray spectrum with decreasing flux (Giacconi et al. 2001).
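As a rough illustration of the count-rate-to-flux step implied by this procedure, the sketch below applies an energy-conversion factor (ECF) to a net count rate. The ECF value is an illustrative placeholder, not the survey's: in practice it comes from PIMMS or the instrument response for the assumed Γ = 1.7 power law and the Galactic N_H of each field.

```python
# Minimal sketch, not the survey pipeline: net counts -> energy flux via an
# assumed energy-conversion factor. ECF_HARD below is a placeholder value.
ECF_HARD = 3.0e-11  # ergs cm^-2 per count, 2-8 keV (illustrative only)

def rate_to_flux(net_counts: float, exposure_s: float, ecf: float = ECF_HARD) -> float:
    """Convert net counts over an exposure to a flux in ergs cm^-2 s^-1."""
    return (net_counts / exposure_s) * ecf

# e.g. 20 hard-band counts in a 60 ks observation:
print(f"{rate_to_flux(20, 60e3):.2e} ergs cm^-2 s^-1")
```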
Optical Imaging

Optical images were obtained using the CTIO 4-m Blanco Telescope in Chile with the MOSAIC-II camera (Muller et al. 1998), which has a field of view of 36′ × 36′. Details of the optical observations are presented in Table 2. All the fields have optical coverage in the V and I filters and were also imaged in the J and K bands in the near infrared; those observations will be reported in a following publication. Reduction of the data was performed using standard procedures included in IRAF v2.12, in particular in the MSCRED package. The data reduction scheme followed was based on the recipe used by the NOAO Deep Wide Survey. Standard calibration images (bias and dome flats) were obtained each night for every filter used. Super-sky flats were constructed from several (∼20) unregistered frames in each filter, masking real objects, in order to obtain a secondary flat-field image. Once the basic calibrations were performed, individual frames were registered and co-added to obtain the final image in each filter. Astrometric solutions for each final image were calculated based on the USNO catalog. Typical astrometric uncertainties are ∼0.3″, smaller than the on-axis PSF size of the Chandra images, therefore allowing an accurate match between the optical and X-ray data.

Objects in the final images were extracted using SExtractor (Bertin & Arnouts 1996). To detect objects we used a threshold of 1.5σ above the background per pixel and a minimum area of 15 pixels (∼1.0 arcsec^2) above that threshold. In several experiments, this combination of parameters gave a good balance between completeness and false detections, the latter being lower than ∼5% for the range of FWHM of our images. Zero points for each image were obtained independently for each night based on observations of Landolt standard fields (Landolt 1992). Aperture photometry was then performed using a diameter of 1.4 times the FWHM. Magnitudes were later corrected for the (small) effects of Galactic extinction in our high-Galactic-latitude fields. The limiting magnitude for each image was calculated based on global RMS measurements of the background and is reported in Table 2. Objects in the V and I images were matched by position, allowing a maximum distance of 1″ between objects in different filters. The typical offset between the V and I counterparts is ∼0.5″, consistent with the previously reported astrometric uncertainties and typical centroid errors, so the choice of 1″ as a threshold provides a good balance to avoid spurious matches. If more than one match was found inside that area, the closest match was used; this only happened in a few cases, given the typical sky density of our optical images. The V - I color was calculated for sources detected in both bands.
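A minimal sketch of this positional matching step (not the survey pipeline; the coordinates below are made-up placeholders) using astropy's catalog matching:

```python
# Minimal sketch: match V- and I-band catalogs by position with a 1" cut.
import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u

v_cat = SkyCoord(ra=[150.0010, 150.0100] * u.deg, dec=[2.0010, 2.0200] * u.deg)
i_cat = SkyCoord(ra=[150.0012, 150.0500] * u.deg, dec=[2.0011, 2.0500] * u.deg)

idx, sep2d, _ = v_cat.match_to_catalog_sky(i_cat)  # nearest I-band neighbour
matched = sep2d < 1.0 * u.arcsec                   # keep pairs closer than 1"
for k in np.flatnonzero(matched):
    print(f"V source {k} -> I source {idx[k]} (sep = {sep2d[k].arcsec:.2f} arcsec)")
```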
Optical Spectroscopy

Given that one of the goals of CYDER is the study of the optically faint X-ray population, only 8-m class telescopes were used for the spectroscopic follow-up. Multi-object spectrographs were used in order to improve the efficiency of the observations, including FORS2 at the VLT in MXU mode and the LDSS-2 instrument on the Magellan I (Baade) Telescope. Details of these observations are given in Table 3. Given the space density of X-ray sources at our flux limit and the field of view of the instruments used, typically ∼8 X-ray sources were observed per mask. For the observations with FORS2 at the VLT, the 300V-20 grism was used, which gives a resolution R ∼ 520 (10.5 Å) for our 1″ slits, with a typical wavelength coverage of 4000-9000 Å depending on the position of the source in the mask. Observations with LDSS-2 used the Med/Blue grism, giving a dispersion of 5.3 Å pixel^-1 at a central wavelength of 5500 Å and a resolution of R ∼ 350 with our 1.0″-wide slits. The typical wavelength coverage with this configuration was ∼4000-7500 Å. Spectral reduction was performed using standard IRAF tasks called by a customized version of the BOGUS code. We calibrated the wavelength scale of each spectrum using He-Ar comparison lamps and the night-sky lines. In order to flux-calibrate our spectra, ∼2 spectrophotometric standards were observed every night.

Catalog

The full catalog of X-ray sources in the CYDER fields is presented in the on-line version of the journal, while for clarity a fraction of the catalog is presented in Table 4. The full catalog is also available on-line at http://www.astro.yale.edu/treister/cyder/. Coordinates are given as measured in the X-ray image, while the offset is calculated with respect to the closest optical counterpart and is only reported when it is smaller than 2.5″, in which case that counterpart was used in the analysis. When a counterpart was not detected in the optical images, the 5σ upper limit in that band is reported. In order to convert count rates into fluxes, the procedure described in § 2.2 was followed. The observed X-ray luminosity was computed only for sources with spectroscopic identification and measured redshift. This luminosity was calculated in the observed frame without accounting for k-corrections or correcting for absorption; therefore, the simple formula L_X = 4π d^2 f_X was used.

The area covered as a function of limiting flux in the hard X-ray band was first estimated individually for each field using the Portable, Interactive Multi-Mission Simulator (PIMMS; Mukai 1993). Given the complexities associated with modeling the varying PSF as a function of off-axis angle, and the presence of diffuse emission in most of these fields, which makes the problem of estimating the completeness level even harder, a constant, higher flux limit was used for each field. Specifically, for each field we used a fixed value of 2.5 times the flux of the faintest source included in the catalog, in order to be sure that the sample is complete above that flux. This roughly corresponds to 20 counts detected in the hard band for ACIS-S observations and 10 counts for fields observed with the ACIS-I CCDs. The flux limit in the hard band assumed for each field is shown in Table 1. The resulting total area of the survey is 0.1 deg^2, with a minimum flux limit of 1.3 × 10^-15 ergs cm^-2 s^-1 (2-8 keV). Figure 1 shows the resulting area versus flux limit curve, in comparison to other surveys like the Great Observatories Origins Deep Survey (GOODS; Giavalisco et al. 2004) and SEXSI (Harrison et al. 2003).

The cumulative flux distribution was calculated as N(>S) = Σ 1/A_i, where the sum runs over all sources with observed hard X-ray flux S_i greater than S, and A_i is the maximum area over which the i-th source could be detected. The resulting log N - log S relation for the CYDER sample is shown in Figure 2. This curve is consistent with the relation computed by other authors (e.g., Moretti et al. 2003; Ueda et al. 2003). In the lower panel of Fig. 2 we show the residuals with respect to the relation computed by Moretti et al. (2003) using a combination of observational data from both shallow wide-field and deep pencil-beam X-ray surveys, showing that the agreement is good. One significant problem with the cumulative log N - log S relation is that the errors in each bin are not independent. The differential flux distribution can be used to avoid this problem. This relation can be expressed as n(S_i) = N_i / (ΔS_i A_i), where in this case the sample was binned with a bin size of 2 × 10^-15 ergs cm^-2 s^-1, N_i is the number of sources in the i-th bin, ΔS_i is the size of the bin and A_i is the total area over which sources in this bin could be detected. In Figure 3 the resulting differential log N - log S is shown. These results were compared to the relation reported by Harrison et al. (2003), who fitted the SEXSI data with a broken power law with separate slopes for the bright and faint ends. As shown in the lower panel of Figure 3, this parametrization provides a good fit to the CYDER data, even though some scatter is present. A χ^2 test of this fit against the observed data gave a reduced χ^2 of 1.37.
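The two area-weighted source-count estimators defined above can be sketched as follows (synthetic fluxes and a single placeholder detection area stand in for the real catalog):

```python
# Minimal sketch, not the survey code: cumulative and differential counts
# weighted by the maximum detectable area A_i of each source.
import numpy as np

rng = np.random.default_rng(1)
flux = 10 ** rng.uniform(-15, -13, 200)  # hard-band fluxes, ergs cm^-2 s^-1
area = np.full(200, 0.02)                # deg^2 over which each source is detectable

def cumulative_counts(s, flux, area):
    """N(>S) = sum of 1/A_i over sources brighter than S."""
    return np.sum(1.0 / area[flux > s])

def differential_counts(edges, flux, area):
    """n(S) = N_i / (Delta S_i * A_i), accumulated per flux bin."""
    n = np.zeros(len(edges) - 1)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sel = (flux >= lo) & (flux < hi)
        n[i] = np.sum(1.0 / area[sel]) / (hi - lo)
    return n

print(cumulative_counts(1e-14, flux, area), "sources deg^-2 above 1e-14")
```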
Optical

207 of the total 267 X-ray sources (77%) were detected in our optical V-band images, searching within a radius of 2.5″ (∼4 times the typical seeing) around the centroid of the X-ray emission. The average offset between an X-ray source and the nearest optical counterpart is ∼1.3″, with a standard deviation of ∼0.5″. In our optical images we typically detected ∼60,000 sources in 30′ × 30′, so the chance of having a random source within a 2.5″ radius is ∼36%; our choice of 2.5″ as the maximum allowed offset is therefore a reasonable compromise to avoid spurious associations. In cases where more than one counterpart was found inside this radius, the optical source closest to the X-ray centroid was assumed to be the right counterpart. The V magnitude distribution for the X-ray sources with detected optical counterparts in the CYDER fields is shown in Figure 4. X-ray sources cover the range from V ≃ 16 to 26 mag and fainter (beyond our optical magnitude limit). The hatched histogram in Figure 4 shows the magnitude distribution of sources targeted for spectroscopy, while the cross-hatched histogram shows the distribution of sources with spectroscopic identifications. A K-S test comparing the total V magnitude distribution to the magnitude distribution of sources targeted for spectroscopy showed that the hypothesis that both distributions are drawn from the same parent distribution (the null hypothesis) is accepted at the 98.7% confidence level. However, the effect of the optical flux on the efficiency of spectroscopic identification can be seen by comparing the magnitude distribution of the successfully identified sources with that of the total sample: the incompleteness of the spectroscopically identified sample at the faint optical flux end is evident in this figure, even though the target selection was independent of the optical properties of the sources. This effect is also observed in the I band (Figure 5), where counterparts were detected for 181 X-ray sources (68%), a lower number than in the deeper V-band images. In this case too, the decrease in the efficiency of spectroscopic identification with decreasing optical flux is evident in Figure 5. The average V - I color for X-ray sources with optical counterparts is 0.92; its distribution, shown in Figure 6, indicates that the efficiency of the spectroscopic identifications is independent of the V - I color of the optical counterpart. A K-S test comparing the spectroscopically identified sample to the total sample shows that both distributions are drawn from the same parent distribution with a confidence level of 99.89%.
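The two-sample K-S comparisons quoted above can be reproduced with scipy; the magnitude arrays below are synthetic stand-ins for the actual source lists:

```python
# Minimal sketch of a two-sample Kolmogorov-Smirnov comparison.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
all_mags = rng.normal(22.5, 1.5, 207)    # V magnitudes, full counterpart sample
identified = rng.normal(22.3, 1.4, 106)  # sources with spectroscopic IDs

stat, p_value = ks_2samp(all_mags, identified)
# A large p-value means the null hypothesis (same parent distribution)
# cannot be rejected at the corresponding confidence level.
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
```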
53 out of 267 (∼20%) sources were not detected in either of the two optical bands. The vast majority of these sources are also very faint in X-rays, so that many of them were detected only in the soft band, where Chandra is more sensitive. This does not imply that they have an intrinsically soft spectrum; in fact, given the known relation between hardness ratio and X-ray flux (Giacconi et al. 2001), it is plausible that these sources are hard and therefore good candidates to be obscured AGN. Five of these sources were detected only in the hard band and should therefore have very hard X-ray spectra, which, combined with their faintness in the optical bands, makes them good candidates to be obscured AGN at relatively high redshifts. This lack of detection of X-ray faint sources in optical images acts as a selection effect against the study of obscured AGN. However, this bias can be overcome by studying these sources in the near infrared (Gandhi, Crawford, Fabian, & Johnstone 2004), where the effects of dust obscuration are much smaller. In a following paper (F. Castander et al., in prep.) the near-infrared properties of these sources will be presented.

The V-band magnitude versus redshift plot (Figure 7) reveals how the source composition changes with redshift, and how it is potentially affected by the implicit optical flux cut for spectroscopy. While at low redshift (z < 1) we find mostly obscured AGN and normal galaxies, characterized by absolute optical magnitudes M_V ≳ -21, at higher redshift these become too faint in the optical bands and therefore only unobscured AGN (which have mostly M_V < -22) can be found. This implies that our survey may be biased against detecting obscured AGN at high redshift. This effect is investigated in more detail in § 5.

Correlations

Sources with spectroscopic identification were classified using a combination of optical and X-ray criteria. X-ray sources showing stellar spectra were classified as stars. For extragalactic sources, the X-ray luminosity was computed from the observed X-ray flux using the relation L_X = 4π d_L^2 f_X, where d_L is the luminosity distance calculated for the assumed cosmology. This luminosity is therefore the uncorrected, observed-frame X-ray luminosity. No attempt was made to correct for dust obscuration or k-corrections, given that for most sources the number of observed counts was too small to perform spectral fitting and therefore to calculate the neutral hydrogen column density N_H or the intrinsic spectral shape.
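A minimal sketch of this luminosity calculation, assuming the cosmology quoted in § 1 (H_0 = 70 km s^-1 Mpc^-1, Ω_m = 0.3, Ω_Λ = 0.7) and using astropy:

```python
# Minimal sketch: observed-frame X-ray luminosity, L_X = 4 pi d_L^2 f_X,
# with no k-correction or absorption correction, as in the text.
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # flat, so Omega_Lambda = 0.7

def observed_lx(flux_cgs: float, z: float) -> float:
    """Flux in ergs cm^-2 s^-1 -> luminosity in ergs s^-1."""
    d_l = cosmo.luminosity_distance(z).to(u.cm).value
    return 4.0 * np.pi * d_l**2 * flux_cgs

print(f"{observed_lx(1e-14, 1.0):.2e} ergs s^-1")  # e.g. 1e-14 cgs at z = 1
```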
In order to separate X-ray emission generated by AGN activity from emission coming from X-ray binaries and star formation in galaxies, we used a simple X-ray luminosity threshold criterion. Locally, the most X-ray luminous star-forming galaxy known (NGC 3256) has a total X-ray luminosity L_X ≃ 8 × 10^41 ergs s^-1 in the 0.5-10 keV band (Lira et al. 2002). Another source of luminous X-ray emission is hot gas in elliptical galaxies, which at low redshift is extended and therefore easily separated from AGN emission; at high redshift it is not resolved and thus harder to separate from AGN activity. However, according to the O'Sullivan et al. (2001) catalog of elliptical galaxies with detected X-ray emission, only a few normal galaxies have L_X > 10^42 ergs s^-1. Therefore, we adopted L_X = 10^42 ergs s^-1 in the total (0.5-8 keV) band as the threshold separating sources dominated by AGN activity from those dominated by star formation or other processes in a galaxy. Given the relatively low number of galaxies found in the survey, we expect this classification method to have a small effect on the total numbers of AGN reported. Objects with a total X-ray luminosity L_X < 10^42 ergs s^-1 and narrow emission or absorption lines (velocity dispersion less than 1000 km s^-1) were classified as normal galaxies, while sources with L_X > 10^42 ergs s^-1 were classified as unobscured (type 1) or obscured (type 2) AGN depending on whether they show broad or narrow lines in their optical spectra. Furthermore, sources with L_X > 10^44 ergs s^-1 are called QSO-1, or simply QSO, if they have broad lines, or QSO-2 if the lines are narrow, but they are still considered AGN.

For sources with spectroscopic identification, Figure 8 shows the V - I color as a function of redshift. X-ray sources identified as type 1 AGN (broad emission lines) fall near the position expected for QSOs, calculated by convolving the optical filters with the Sloan Digital Sky Survey composite quasar spectrum (Vanden Berk et al. 2001). Galaxies and type 2 AGN, which are only detected up to z ∼ 1.5, are located in the region expected for galaxies ranging from elliptical to Sb types and have redder colors than type 1 AGN. The expected colors for each type of galaxy at a given redshift were calculated using the galaxy spectrum models of Fioc & Rocca-Volmerange (1997), assuming that there is no evolution of the spectrum with redshift. From Figure 8 it is clear that objects classified as obscured AGN have redder colors, consistent with those of their host galaxies. In fact, obscured AGN have an average V - I color of 1.46 with a standard deviation of 0.58, while unobscured AGN have an average color of 0.56 with a standard deviation of 0.46.

The redshift distribution for sources with spectroscopic identification is shown in Figure 9. When the whole sample of X-ray sources is considered, this distribution has a maximum at very low redshift, z ≃ 0-0.6. However, when only sources with L_X > 10^42 ergs s^-1 (i.e., those dominated by AGN activity) are included, the peak is displaced to higher redshifts, namely z ∼ 1. As shown by the hatched distribution in Figure 9, the high-redshift population (z > 1.3) is completely dominated by broad-line AGN (most of them quasars with L_X > 10^44 ergs s^-1). This is explained by the high optical luminosity of these objects, which makes them easier to identify even at large distances, and by the lack of near-infrared information at this point, which is very useful for detecting obscured AGN, in particular at high redshift (Gandhi, Crawford, Fabian, & Johnstone 2004).

In order to investigate possible relations between X-ray and optical emission for different classes of sources, in Figure 10 we plot hard X-ray flux versus V-band magnitude. Most of the sources are located in the region bounded by log f_X/f_opt = ±1. Starburst galaxies detected in X-rays are typically bright in the optical bands and faint in X-rays, and are therefore characterized by log f_X/f_opt < -1 (see Hornschemeier et al. 2003 and references therein).
Unobscured (type 1) AGN/quasars are located around the log f_X/f_opt = 1 position, although the scatter is large, while obscured (type 2) AGN/quasars in general have log f_X/f_opt > 1, since most of the optical light from the central engine is blocked from our view; low-luminosity examples of obscured AGN can, however, also be found with log f_X/f_opt ≃ 0, as will be discussed in § 5. Unidentified sources at high f_X/f_opt are unlikely to be unobscured AGN, because their broad emission lines would have been easy to see in the optical spectra, so they are probably obscured AGN.

Figure 11 shows the observed-frame hard X-ray (2-8 keV) luminosity versus redshift diagram for sources with spectroscopic redshifts. If we assume a flux limit of 6.9 × 10^-16 ergs cm^-2 s^-1 (which would yield a total of 5 counts in the 0.5-8 keV band in a Chandra ACIS-I 60 ks observation for Γ = 1.7), the solid line in Figure 11 shows the detection limit for X-ray sources in our survey. If an optical magnitude of V = 25 mag is taken as the approximate flux limit for spectroscopy (there are fainter sources for which spectroscopy is possible, but their identification relies on the presence of strong emission lines) and a ratio of X-ray to optical emission of f_X/f_opt = 1 is assumed, then the dashed line in Figure 11 shows our limiting luminosity for sources with spectroscopy as a function of redshift. This explains why the incompleteness of the spectroscopic sample is particularly important at high redshift, where the fraction of X-ray sources with spectroscopic identification declines.

If the same material that causes the absorption of X-rays is responsible for the extinction in the optical bands, then a relation between these two quantities can be expected: the reddest sources in the optical (highest V - I) should also be the hardest X-ray sources. A typical way to quantify the steepness of the X-ray spectrum is the hardness ratio, HR = (H - S)/(H + S), where H and S are the count rates in the hard and soft X-ray bands, respectively. In Figure 12 the HR versus V - I optical color is presented. In this diagram, no clear relation between HR and optical color is observed. The absence of a correlation can be explained by the differences in the intrinsic optical and X-ray spectra of the different types of sources detected in the X-ray bands, independent of the amount of obscuration present. Note, however, that in general sources optically classified as obscured AGN are redder (larger V - I colors) than unobscured AGN, as was previously observed in Figure 8, and also tend to have higher values of HR. This lack of a strong relationship between HR and optical color, even for sources classified as AGN-dominated, can be explained in part by the effects of k-corrections caused by the different redshifts of the sources, and by changes in the intrinsic spectrum with parameters other than obscuration, e.g., luminosity (Ho 1999). Sources without spectroscopic identification (crosses in Figure 12) in general have redder colors than unobscured AGN, similar to the colors of spectroscopically confirmed obscured AGN, and are therefore consistent with being moderately obscured AGN at relatively high redshift (z ≳ 1). Most of these sources, however, present a soft X-ray spectrum, which can be explained if they lie at moderately high redshift, so that the observed-frame Chandra bands trace higher-energy emission that is less affected by absorption. This is, however, highly speculative, and the final answer about the nature of these optically faint X-ray sources will come either from deeper optical spectroscopy or from the near-infrared data.
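The hardness ratio defined above is a one-line computation; the sketch below also illustrates the limiting case of a hard-band-only detection:

```python
# Minimal sketch of the hardness ratio; the counts are placeholders.
def hardness_ratio(hard_counts: float, soft_counts: float) -> float:
    """HR = (H - S) / (H + S); +1 = hard-band only, -1 = soft-band only."""
    return (hard_counts - soft_counts) / (hard_counts + soft_counts)

print(hardness_ratio(52, 0))   # a source detected only in the hard band -> 1.0
print(hardness_ratio(10, 40))  # a soft-dominated source -> -0.6
```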
Identifications

Of the 267 X-ray sources detected in the CYDER fields, 106 were identified using optical spectroscopy. While the fraction of sources identified is biased toward higher optical fluxes (see Figures 4 and 5), Figure 6 shows that the optical colors of the sources targeted and identified by optical spectroscopy follow a distribution similar to that of the total sample. The redshift distribution of the spectroscopically identified sample is presented in Figure 9. X-ray sources in this sample span a wide range in redshift, 0 < z < 4.6. The mean redshift of our extragalactic sample with spectroscopic identification is <z> = 1.19 and the peak is located at low redshift, z ≃ 0.2-0.6. When only the sources dominated by AGN activity (i.e., L_X > 10^42 ergs s^-1) are considered, the mean redshift is <z> = 1.34, while the peak is at z ≃ 0.5. For sources optically classified as unobscured AGN, the average redshift is <z> = 1.82 and the peak is at z_p = 1.3. Therefore, we conclude that the nature of the identified X-ray sources changes as a function of redshift. At z < 0.3, the sample is dominated by normal galaxies (∼60%) and obscured AGN. In the 0.3 < z < 1 region, just a few normal galaxies are found and the population is dominated by obscured AGN (77%), while at z > 1 the vast majority of the sources found are unobscured AGN.

The hard X-ray luminosity distribution for the sample of sources with spectroscopic redshift can be seen in Figure 13. In terms of luminosity, the few sources optically classified as galaxies dominate the low-luminosity bins. In the intermediate X-ray luminosity bins (10^42 < L_X < 10^44 ergs s^-1), most of the sources are optically classified as obscured AGN, while in the higher luminosity bins (L_X > 10^44 ergs s^-1) the vast majority of the sources are optically identified as unobscured AGN. This change of source type as a function of X-ray luminosity is further investigated in § 5.

In our sample there is only one source classified as QSO-2 based on its observed X-ray luminosity and optical spectrum: CXOCY-J125315.2-091424 at z = 1.154, located in the C2 field. 52 counts were detected in the hard X-ray band, while no emission was detected in the soft X-ray band, so the hardness ratio is +1. The optical spectrum of this source is presented in Figure 14. Narrow emission lines of C III, Mg II and O II are clearly visible in this spectrum and were used to calculate the redshift of the source. The total (0.5-8 keV) observed X-ray luminosity of this source is L_X ≃ 2 × 10^44 ergs s^-1, making it the brightest obscured AGN in our sample. Given the observed hardness ratio and redshift of this source, the expected neutral hydrogen column density along the line of sight is ∼10^23 cm^-2, assuming an intrinsic power law with photon index 1.9 or 1.7, consistent with the optical classification as a very obscured AGN.

In the total sample with spectroscopic identification, 7 sources are classified as stars (6.6%), 11 as normal galaxies (10.4%), 38 as obscured AGN (35.8%) and 50 as unobscured AGN (47.2%). These fractions are similar to the findings of other X-ray surveys, as shown in Table 5. The ChaMP survey covers a total of 14 deg^2.
In their first spectroscopy report, 6 Chandra fields were covered to a depth of r ≃ 21 mag. In order to compare with their results, we applied our classification scheme to their data: narrow-line and absorption-line galaxies with L_X > 10^42 ergs s^-1 were classified as obscured AGN, sources with broad lines as unobscured AGN, and the remaining extragalactic sources as galaxies. The main reason for the discrepancies between their source mix and ours (Table 5) is the optical magnitude cut for spectroscopy, ∼2 magnitudes brighter than CYDER, which explains why their sample is clearly dominated by unobscured AGN, the optically brightest X-ray emitting sources. In Table 5, our sample is also compared to the Chandra Deep Fields North and South, each covering ∼0.1 deg^2. In the first case, we use the spectroscopic follow-up of X-ray sources by Barger et al. (2003), which is 87% complete for sources with R < 24 mag. Here, our classification scheme was applied directly to their data, finding a low number of unobscured AGN, which can be explained by the optical nature of the sources selected for spectroscopic follow-up. Also, a larger number of galaxies relative to other surveys can be seen. This can be explained by the very deep X-ray coverage of the CDF-N, which allows the detection of a large number of sources with low f_X/f_opt and high spatial density, like non-active galaxies. In the CDF-S, our results were compared with the spectroscopic identifications of X-ray sources from Szokoly et al. (2004). In this case, spectra were obtained for 168 X-ray sources and the identifications are 60% complete for sources with R < 24 mag. Compared to the CYDER survey, the source composition is similar, even though a larger number of X-ray normal galaxies is found in the CDF-S, as expected given its fainter X-ray sensitivity. However, the fractions of obscured to unobscured AGN are similar (within ∼10%), which can be explained by the similarities between the spectroscopic follow-up programs, since both CYDER and the CDF-S are ∼50% complete for X-ray sources with R < 24 mag.

Discussion

The CYDER survey occupies an intermediate regime in terms of area coverage and sensitivity. A critical step in understanding the properties of the X-ray population is the existence of extensive follow-up at other wavelengths. In particular, optical spectroscopy plays a key role, allowing us to determine redshifts and to identify the origin of the X-ray emission. Therefore, most X-ray surveys are limited by their ability to obtain spectroscopic identifications for a large fraction of the sources, ideally without biasing the sample. In the case of the CYDER survey, we used 8-m class telescopes in order to extend the spectroscopic coverage to fainter optical magnitudes, namely to R ≃ 24 mag. From Table 5, it is clear that the kind of X-ray sources identified in a survey depends directly on the depth of the optical spectroscopic follow-up. For example, unobscured AGN are bright in the optical bands, so in surveys with shallow optical follow-up mostly unobscured AGN are identified (e.g., ChaMP). On the other hand, deep X-ray coverage, together with an extensive spectroscopy campaign based mostly on the Keck 10-m telescopes, allows the CDF-N to identify more faint optical counterparts. Therefore, the population in very deep surveys is dominated by normal galaxies at the CDF-N depths and by obscured AGN in the CDF-S range. In our survey, a total of 50 (47.2%) broad-line AGN were detected.
While all of them have a hard X-ray luminosity L_X > 10^42 ergs s^-1, two thirds have L_X > 10^44 ergs s^-1 and are therefore classified as quasars. The average redshift of the broad-line sample is <z> ∼ 1.82, much higher than the value found for the remaining X-ray sources. This is clearly explained by the greater optical brightness of unobscured AGN relative to other X-ray emitters.

Using a combination of HR and X-ray luminosity together with optical spectroscopy is very useful for classifying X-ray sources. In Figure 15, the HR versus hard X-ray luminosity diagram is presented. In this case we used HR = -0.2 in AGN-dominated sources, rather than the optical spectra, to separate obscured and unobscured AGN; this is equivalent to an effective column density N_H ≃ 4 × 10^21 cm^-2 for a spectral index Γ = 1.9, or N_H ≃ 3 × 10^21 cm^-2 for Γ = 1.7, so this is a conservative cut on the number of obscured AGN. Also, quasar-like sources are distinguished from other X-ray sources using L_X > 10^44 ergs s^-1 as the dividing line. Except for the one source described in § 4.4, all the quasars have broad emission lines in their optical spectra. Most sources that show broad emission lines have HR < -0.2, meaning that they have little or no absorption in X-rays, consistent with their unabsorbed AGN optical spectra. For non-AGN-dominated X-ray emission, no correlation is found between HR and X-ray luminosity. Also, these sources do not have a characteristic HR value, and both very hard and very soft sources can be found. This X-ray emission is expected to come mostly from high-mass X-ray binaries and type II supernova remnants in spiral galaxies, while for elliptical galaxies the X-ray emission is most likely dominated by hot gas with some contribution from low-mass X-ray binaries. The wide range of X-ray emitter classes, together with the lower luminosities (which lead to lower fluxes and therefore larger errors in the HR measurements), can thus explain why there is no clear correlation between HR and X-ray luminosity and no characteristic HR value for low-luminosity, non-AGN X-ray emitters.

For AGN-dominated sources, the relation between the f_X/f_opt ratio and the hard X-ray luminosity (L_X) is investigated in Figure 16. For sources classified optically as unobscured AGN there is no correlation between f_X/f_opt and X-ray luminosity, while for obscured AGN there is a clear correlation, in the sense that obscured sources with lower X-ray luminosity have lower f_X/f_opt, while hard X-ray sources with large X-ray luminosity also have systematically larger values of f_X/f_opt. This effect can be explained if the optical light detected in obscured AGN is dominated by emission from the host galaxy (e.g., Treister et al. 2004b), which is nearly independent of the AGN luminosity. Therefore, for obscured sources that are luminous in X-rays we can expect a larger f_X/f_opt ratio, as observed in our sample. Performing a linear fit to the observed sample of sources optically classified as obscured AGN, we obtain a correlation with ∼2σ significance using the minimum χ^2 test; the best-fit linear relation is shown by the solid line in Figure 16. This trend can also be observed at the same significance level if the I-band optical flux is used instead. This can be explained because the V band is bluer and therefore more affected by dust obscuration, while in the I band the host galaxy is more luminous; in both cases the host galaxy emission dominates over the AGN optical radiation. A similar relation between f_X/f_opt and X-ray luminosity for obscured AGN was found by Fiore et al. (2003) in the High Energy Large Area Survey (HELLAS2XMM). Even though they used the R band to calculate the optical luminosity, the correlations are similar.
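As a sketch of the flux-ratio diagnostic used in Figures 10 and 16: the zero point below follows the common Maccacaro et al. (1988)-style convention for the V band and is an assumption here, since the survey's exact constant is not quoted in the text.

```python
# Minimal sketch of log(f_X/f_opt); the zero point is an assumed convention,
# not necessarily the one used by the survey.
import numpy as np

def log_fx_fopt(fx_cgs: float, v_mag: float, zero_point: float = 5.37) -> float:
    """log(f_X / f_opt) = log f_X + m_V / 2.5 + zero point."""
    return np.log10(fx_cgs) + v_mag / 2.5 + zero_point

print(f"{log_fx_fopt(1e-14, 22.0):.2f}")  # e.g. a 1e-14 cgs source at V = 22
```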
Given the difficulties in finding obscured AGN at z > 1, we are not able to disentangle a dependence of the obscured-to-unobscured AGN number ratio on redshift from the strong selection effects acting on the sample. However, Figure 13 gives some indication that this ratio may depend on the observed X-ray luminosity. In order to investigate this effect in more detail, in Figure 17 the fraction of obscured to all AGN is shown as a function of hard X-ray luminosity, combining the hard X-ray sources detected in the CYDER survey with 77 AGN with L_X > 10^42 ergs s^-1 located in the GOODS-S field, with identifications and redshifts reported by Szokoly et al. (2004), in order to increase the number of X-ray sources in each bin. This figure clearly shows that the fraction of obscured AGN depends on X-ray luminosity. A similar trend was first observed by Lawrence & Elvis (1982) and is consistent with the relations reported by Ueda et al. (2003) and Hasinger (2004).

In order to further investigate this observed correlation, and to determine whether it can be explained by selection effects, we used the AGN population models of Treister et al. (2004b). Originally used to predict the AGN number counts at any wavelength from the far infrared to X-rays, the Treister et al. (2004b) model is based on the Ueda et al. (2003) luminosity function and its luminosity-dependent density evolution, in which the intrinsic N_H distribution comes from a very simple unified model with the intrinsic obscured-to-unobscured AGN ratio set to 3:1. The AGN spectral energy distribution is modeled with three parameters, namely the intrinsic X-ray luminosity of the central engine, the neutral hydrogen column density along the line of sight and the redshift of the source, in order to convert fluxes from one wavelength to another. Even though this model was first applied to the GOODS survey, it can be applied to any other X-ray survey if the proper flux limit and area coverage are used. Given that the luminosity function and the AGN SED library in this model are fixed, there is no free parameter to adjust. In Figure 17 we show the predicted correlation between the fraction of obscured to all AGN and hard X-ray luminosity for sources with R ≲ 24 mag (i.e., the optical flux limit for spectroscopy), both for the intrinsic (dot-dashed line) and the observed (i.e., adding the effects of obscuration and k-correction; solid line) X-ray luminosity. In both cases, a decrease in the fraction of obscured to all AGN with increasing luminosity is observed, even though the intrinsic obscured-to-total ratio is fixed at 3:4 (i.e., 3:1 obscured to unobscured). Therefore, this observed correlation can be explained as a selection effect, since the lower optical flux of obscured AGN makes them harder to identify in spectroscopic surveys, in particular at the higher redshifts where most of the more luminous AGN are located.
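The binning behind Figure 17 can be sketched as follows (the luminosities and type flags below are synthetic placeholders, not the CYDER+GOODS-S sample):

```python
# Minimal sketch: fraction of obscured AGN per decade of hard X-ray luminosity.
import numpy as np

rng = np.random.default_rng(3)
log_lx = rng.uniform(42, 46, 150)   # log hard X-ray luminosity (ergs s^-1)
obscured = rng.random(150) < 0.6    # True if optically classified as type 2

edges = np.arange(42, 47, 1.0)      # Delta log(L_X) = 1.0 bins, as in the text
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (log_lx >= lo) & (log_lx < hi)
    if sel.sum():
        frac = obscured[sel].mean()
        print(f"log L_X in [{lo:.0f}, {hi:.0f}): obscured fraction = {frac:.2f}")
```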
A crude way to estimate the intrinsic neutral hydrogen column density (N_H) along the line of sight is based on the measured HR. In order to estimate N_H, we assumed that the intrinsic X-ray spectrum of an AGN can be described by a power law with photon index Γ = 1.9 (e.g., Nandra & Pounds 1994; Nandra et al. 1997; Mainieri et al. 2002). We then generated a conversion table using XSPEC (Arnaud 1996) to calculate the expected HR for N_H in the range 10^20-10^24 cm^-2 and redshifts from z = 0 to z = 5. The spectral response of the ACIS camera was considered in this calculation. Also, the amount of Galactic absorption in each field, calculated from the observations of Stark et al. (1992), was added, since all the X-ray emission from extragalactic sources passes through the interstellar medium of our Galaxy. Then, using this conversion table, the observed HR can be translated into an N_H value, taking into account the redshift of the source. Even though for individual sources this method of estimating N_H may not be very accurate, given the uncertainties associated with performing a spectral fit based on only two bins, these individual uncertainties average out in the distribution.

Given that the ACIS camera is more sensitive in the soft X-ray band, we decided to exclude sources not detected in the hard band, in order to use only the sources for which the HR gives a reasonable idea of the X-ray spectrum. For AGN-dominated sources (i.e., L_X > 10^42 ergs s^-1), this choice eliminates 34% of the sources. By cutting the sample to the sources detected in the hard band, similar fractions of objects optically classified as obscured and unobscured AGN are removed, so we do not expect a significant bias to be introduced by this choice, which, on the other hand, allows a more precise statistical analysis, since a definite flux limit can be used. Also, sources dominated by AGN emission in X-rays have hard spectra, so if only sources detected in the hard band are considered, the contamination by non-AGN X-ray emitters is reduced. Therefore, even sources detected with high significance in the soft band but not detected in the hard band are removed from the following analysis.

The N_H distribution for the sources in the reduced sample is presented in Figure 18. While a significant number of sources, 23%, have N_H values consistent with no absorption (plotted at N_H = 10^20 cm^-2), some sources present moderate to high levels of absorption, with N_H > 10^23 cm^-2 (∼12%). The N_H distribution for the X-ray sources in the GOODS survey (Dickinson & Giavalisco 2002; Giavalisco et al. 2004), which overlaps with the Chandra Deep Fields North and South, was previously calculated following a similar procedure by Treister et al. (2004b). The results of this calculation are also presented in Figure 18, scaled to the number of sources in the CYDER survey. Comparing the results from these two surveys with the predictions for the intrinsic N_H distribution, based on a simple AGN unified model and the Ueda et al. (2003) luminosity function, made by Treister et al. (2004b), we find that the obscuration bias is more important for CYDER than for GOODS, meaning that sources with N_H > 3 × 10^22 cm^-2 are preferentially missed in the CYDER survey, since obscuration makes them fainter even in the hard X-ray bands.
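A minimal sketch of this HR-to-N_H inversion: the tiny grid below is an illustrative stand-in for the XSPEC-generated conversion table (Γ = 1.9, ACIS response, Galactic absorption) described above, with made-up HR values.

```python
# Minimal sketch: invert a precomputed HR(N_H, z) grid to estimate N_H.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_nh = np.array([20.0, 21.0, 22.0, 23.0, 24.0])  # grid in log N_H (cm^-2)
zgrid = np.array([0.0, 1.0, 2.0, 3.0])             # grid in redshift
# Placeholder HR values; a real table comes from spectral models in XSPEC.
hr_grid = np.linspace(-0.8, 0.9, log_nh.size)[:, None] - 0.1 * zgrid[None, :]

hr_of = RegularGridInterpolator((log_nh, zgrid), hr_grid)

def nh_from_hr(hr_obs: float, z: float) -> float:
    """Invert HR(N_H; z) by scanning the N_H axis for the closest match."""
    trial = np.linspace(log_nh[0], log_nh[-1], 401)
    hr_trial = hr_of(np.column_stack([trial, np.full_like(trial, z)]))
    return trial[np.argmin(np.abs(hr_trial - hr_obs))]

print(f"log N_H ~ {nh_from_hr(0.2, 1.0):.1f}")
```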
Using the AGN number count predictions of Treister et al. (2004b), adapted to the CYDER flux limits and area coverage, the observed hard X-ray flux distribution can be compared to the predictions of this model (Figure 19). The results are very encouraging, showing a very good agreement characterized by a K-S confidence level of ∼96% for accepting the null hypothesis. Using this model, Figure 19 also shows the predicted contributions of unobscured (type 1; dashed line) and obscured (type 2; dotted line) AGN. While in the Treister et al. (2004b) model the intrinsic ratio of obscured to unobscured AGN is 3:1 (using N_H = 10^22 cm^-2 as the dividing point), the prediction for the CYDER X-ray sample is a ratio of 2.35:1 once the survey flux limits in the X-ray bands are considered. This is consistent with the claim that sources with N_H > 3 × 10^22 cm^-2 are preferentially missed in the CYDER X-ray sample. However, this ratio should be compared to the value of 0.76:1 obtained previously using optical spectroscopy to separate obscured and unobscured AGN. This significant reduction in the relative number of sources classified as obscured AGN can be explained by the optical magnitude cut introduced when optical spectroscopy is used. In the case of the CYDER multiwavelength follow-up, only sources with V < 25 mag have optical spectroscopy, and the completeness level decreases strongly with decreasing optical flux (Figure 4). Since obscured AGN are in general optically faint sources (e.g., Alexander et al. 2001; Koekemoer et al. 2002; Treister et al. 2004b), they are harder to identify using spectroscopy, which causes their relative number to decrease when compared to X-ray sources that are brighter in the optical bands, like unobscured AGN.

Conclusions

We have presented the first results from the multiwavelength study of the X-ray sources in the CYDER survey. In this work, we studied the optical and X-ray properties of 267 sources detected in 5 archival Chandra fields, covering a total of ∼0.1 deg^2 and spanning a flux range of 10^-15-10^-13 ergs cm^-2 s^-1. The X-ray flux distribution of CYDER sources follows a log N - log S relation, both cumulative and differential, that is consistent with the observations of existing X-ray surveys. The cumulative log N - log S distribution is consistent with the observations of Ueda et al. (2003), while the differential log N - log S is in good agreement with the distribution derived by the SEXSI survey (Harrison et al. 2003). This implies that there are no significant variations in this sample compared to other existing surveys, and therefore that the results can be directly compared. In general, sources optically classified as obscured AGN have redder optical colors than unobscured AGN, closer to the colors of normal galaxies, as expected from the unified model of AGN. Also, a correlation between f_X/f_opt and hard X-ray luminosity is observed in the sample of sources optically classified as obscured AGN. The fraction of obscured AGN appears to change as a function of X-ray luminosity, in the sense that for more luminous sources the ratio of obscured to unobscured AGN is lower than for less luminous objects. However, this relation can be explained as a selection effect, since obscured AGN are fainter in the optical bands and therefore harder to identify in spectroscopic surveys. In fact, the observed correlation can be reproduced using the Treister et al. (2004b) models, which have a fixed intrinsic obscured-to-total ratio of 3:4, if an optical cut of R ≲ 24 mag (i.e., the magnitude limit for spectroscopy) is applied. The N_H distribution for sources in the CYDER survey is consistent with the distribution predicted by Treister et al.
(2004b), assuming a torus geometry for the obscuring material, once selection effects are accounted for. This implies that X-ray surveys are subject to significant incompleteness for sources with large amounts of absorption. In the particular case of the CYDER survey, this incompleteness is important for sources with N_H > 3 × 10^22 cm^-2. However, once these selection effects are accounted for, the observed hard X-ray flux distribution is consistent with the predictions of the models of Treister et al. (2004b).

ET would like to thank the support of Fundación Andes, the Centro de Astrofísica FONDAP and the Sigma Xi foundation through a Grant-in-Aid of Research. This material is based upon work supported by the National Science Foundation under Grant No. AST-0201667, an Astronomy & Astrophysics Postdoctoral Fellowship awarded to E. Gawiser. We thank the anonymous referee for a very careful review and a constructive report that improved the presentation of this paper. We would like to thank Steve Zepf, Rafael Guzman and Maria Teresa Ruiz for their help in the original design of this survey. We also thank the staff at Las Campanas Observatory, Cerro Tololo Inter-American Observatory and Cerro Paranal for their assistance during the observations.

Fig. 1.—Area versus flux limit curve for the CYDER survey, compared to SEXSI (Harrison et al. 2003) and GOODS.

Fig. 3.—The upper panel shows the CYDER data, while the solid line shows the best fit to the SEXSI counts. In the lower panel, the residuals, computed as the ratio between the best-fit curve and the data, are shown. Even though some scatter is present, the fit provides a good description of the distribution of CYDER sources, with a reduced χ^2 of 1.37.

Fig. 4.—V magnitude distribution for X-ray sources with detected optical counterparts in the CYDER fields. The hatched histogram shows the magnitude distribution of sources targeted for optical spectroscopy, while the cross-hatched histogram shows the distribution of sources successfully identified. While sources were selected for spectroscopy independently of their optical properties, it is clear that spectroscopic identifications are much more efficient for X-ray sources that are brighter in the optical.

Fig. 5.—I magnitude distribution for X-ray sources with detected optical counterparts. The magnitude distribution of sources targeted for spectroscopy is shown by the hatched histogram, while the distribution of sources successfully identified is shown by the cross-hatched histogram. Again, the efficiency of spectroscopic identifications is much higher for sources brighter in the optical bands.

Fig. 9.—Redshift distribution for sources with spectroscopic identification. Hatched histogram: distribution for sources with L_X > 10^42 ergs s^-1 (i.e., AGN-dominated). In this case the broad peak of the distribution is found at z ∼ 1.

Fig. 12.—Hardness ratio, HR = (H - S)/(H + S), where H and S are the hard and soft X-ray band counts respectively, versus V - I color; a source with HR = 1 was only detected in the hard band, while one with HR = -1 was only detected in the soft band. Sources with fewer than 50 counts observed in the soft band and not detected in the hard band are not shown in this plot. Symbols are the same as in Figure 10. Given the spread in intrinsic V - I color and X-ray spectral shape, a clear correlation between HR and optical color is not observed; however, a general trend can be seen in the sense that objects classified as type 2 AGN are redder and have larger HR values, consistent with the presence of obscuration affecting both the X-ray and the optical emission.

Fig. 13.—Hard (2-8 keV) luminosity distribution for the 106 sources with spectroscopic identification. Hatched histogram: luminosity distribution of unobscured AGN.
Cross-hatched histogram: luminosity distribution of galaxies (i.e., L_X < 10^42 ergs s^-1). Unobscured AGN dominate the high-luminosity part of the distribution, while obscured AGN are the majority of the sources in the 10^42 < L_X < 10^44 ergs s^-1 region.

Fig. 14.—Optical spectrum of the QSO-2 CXOCY-J125315.2-091424.

Fig. 15.—Hardness ratio versus hard X-ray luminosity. Dashed lines separate galaxies from AGN at L_X = 10^42 ergs s^-1 and "quasars" from lower-luminosity AGN at L_X = 10^44 ergs s^-1. The classification scheme based on the X-ray spectral properties, using HR = -0.2 to separate obscured and unobscured AGN for sources with L_X > 10^42 ergs s^-1 (dotted line), can be compared to the scheme used in this paper based on the optical spectrum and X-ray luminosity, showing that in general obscured AGN have the hardest X-ray spectra.

Fig. 16.—Hard X-ray to optical (measured in the observed-frame V band) flux ratio versus hard X-ray luminosity for sources with L_X > 10^42 ergs s^-1 (i.e., AGN-dominated). While sources optically classified as broad-line AGN (circles) are scattered over the L_X > 10^42 ergs s^-1 portion of this diagram, for obscured AGN (narrow emission lines in the spectrum; triangles) we observe a rough correlation between f_X/f_opt and L_X, namely, sources with higher luminosity have larger values of f_X/f_opt. The solid line shows the minimum-χ^2 fit to these data. The existence of this correlation can be explained if most of the optical emission of obscured AGN comes from the host galaxy, which would be roughly independent of the luminosity of the AGN.

Fig. 17.—Fraction of objects optically classified as obscured AGN relative to all AGN, in Δlog(L_X) = 1.0 bins, combining the hard X-ray sources in the CYDER survey with the sources detected in the GOODS-S field with spectroscopic identifications from Szokoly et al. (2004), in order to obtain a larger sample. The decrease in the number of obscured AGN with X-ray luminosity can be clearly seen in this figure. The dot-dashed line shows the predicted correlation using the models of Treister et al. (2004b), which assume a constant, fixed obscured-to-total AGN ratio of 3:4 (dashed line), if only objects with optical magnitude R ≲ 24 mag (i.e., the optical cut for spectroscopy) are considered and the effects of obscuration and k-correction are not taken into account in calculating the X-ray luminosity. The solid line shows the predicted correlation if the intrinsic hard X-ray luminosity in the model is corrected for obscuration and redshift effects. From these results, we can see that the observed correlation can be explained as a selection effect caused by the need for spectroscopic identification in order to calculate luminosities.

Fig. 18.—Neutral hydrogen column density (N_H) distribution deduced for X-ray sources with measured spectroscopic redshift and detected in the hard band (solid line). The value of N_H was calculated from the HR assuming an intrinsic power-law spectrum with photon index Γ = 1.9, typical of AGN activity, and the spectral response of the ACIS camera. The redshift of each source was taken into account to calculate the intrinsic amount of absorption in the X-ray spectrum. The N_H distribution for sources in the GOODS survey (dashed line), as calculated by Treister et al. (2004b), shows that in CYDER absorbed X-ray sources with N_H > 3 × 10^22 cm^-2 are preferentially missed, although they appear in deeper surveys. Dot-dashed line: models of Treister et al. (2004b) adapted to the CYDER total area and flux limits.
Fig. 19.-Hard X-ray (2-8 keV) flux distribution for sources detected in the CYDER fields (heavy solid line) and predicted using the simple unified models of Treister et al. (2004b) (solid line). Predicted contributions by unobscured (type 1) AGN (dashed line) and obscured (type 2) AGN (dotted line) are also shown. The agreement between the predicted and observed distributions is good, with a K-S confidence level for accepting the null hypothesis of ∼96%.
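A minimal sketch of the kind of one-sample K-S comparison quoted above, assuming the model prediction is available as a cumulative distribution function; both the flux sample and the model CDF below are placeholders of ours, not the survey data or the actual model.

import numpy as np
from scipy import stats

# Placeholder data: mock 2-8 keV fluxes (erg cm^-2 s^-1); in the paper
# these would be the observed CYDER source fluxes.
rng = np.random.default_rng(0)
observed_fluxes = 10.0 ** rng.uniform(-15.5, -13.5, size=100)

def model_cdf(flux):
    # Toy stand-in for the model's cumulative flux distribution
    # (log-uniform between the assumed survey limits above).
    logf = np.log10(flux)
    return np.clip((logf + 15.5) / 2.0, 0.0, 1.0)

stat, p_value = stats.kstest(observed_fluxes, model_cdf)
# A p-value of ~0.96 would correspond to accepting the null hypothesis
# (observed and predicted fluxes drawn from the same distribution) at
# the confidence level reported above.
print(stat, p_value)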
2014-10-01T00:00:00.000Z
2004-11-12T00:00:00.000
{ "year": 2004, "sha1": "3113f12d8d48d89cb1708391449a3c75cd2d4cae", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/0411325v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "13b0139dc378121f5d481a8352203fca13fe55c1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225357472
pes2o/s2orc
v3-fos-license
Prospective study on assessment of efficacy of etoricoxib versus etoricoxib with combination of glucosamine and chondroitin in primary osteoarthritis of knee
Introduction: In the elderly, one of the most common joint diseases is osteoarthritis (OA). Established risk factors include obesity, increasing age, female sex, knee joint injury and meniscectomy. NSAIDs, including selective COX-2 inhibitors, have come to play an important role in the pharmacologic management of arthritis and pain. Etoricoxib has been found to be less harmful to the gastrointestinal, renovascular and cardiovascular systems. Materials and Methods: Demographic data was collected in detail from all the patients. The site and severity of pain was assessed using the Lequesne Index. The subjects were randomly divided into 2 groups. The first group (Group A) was treated with Etoricoxib [90 mg OD for 14 days] and the second group (Group B) was treated with Etoricoxib [14 days] and a combination of glucosamine and chondroitin [1 month]. The severity of the pain was noted from all the patients before treatment, at the 1st follow-up after 1 month, and at the 2nd follow-up after 2 months. Results: No patient in either group A or B was in the extremely severe category, while 80% in Group A and 60% in Group B were in the moderate to severe category; 20% in Group A and 40% in Group B were in the mild to moderate category after 4 weeks of treatment. After 8 weeks of treatment, 10% in Group A and 5% in Group B were in the moderate to severe category and 55% in Group A and 28% in Group B were in the mild to moderate category, while 35% in Group A and 67% in Group B were under the satisfactory joint function category. Conclusion: Etoricoxib 90 mg OD for 3 weeks plus a combination of glucosamine and chondroitin for 4 weeks is more effective when compared to Etoricoxib as single therapy. © 2020 Published by Innovative Publication. This is an open access article under the CC BY-NC license (https://creativecommons.org/licenses/by-nc/4.0/)
Introduction
In the elderly, one of the most common joint diseases is osteoarthritis (OA). Osteoarthritis is characterized by the breakdown of cartilage in joints. The most commonly affected joints are the knees, hips, hand, spine and shoulder joints. As the disease progresses, there is a direct effect on the quality of life of the patient, which includes functional as well as social activities, body image and also their emotional well-being. 1 As cartilage deteriorates, the bones of the joint begin to rub against one another, causing stiffness and pain, which often impairs movement. Osteoarthritis also can damage ligaments, menisci, and muscles. Bone or cartilage fragments may float in the joint space, causing irritation and pain. Bone spurs, or osteophytes, may also develop, causing additional pain and potentially damaging surrounding tissues. 2 Around the world, an estimated 10%-15% of adults over 60 have some degree of osteoarthritis. Osteoarthritis is the second most common rheumatologic problem and it is the most frequent joint disease, with a prevalence of 22% to 39% in India. OA is more common in women than men, but the prevalence increases dramatically with age. Nearly 45% of women over the age of 65 years have symptoms, while radiological evidence is found in 70% of those over 65 years. OA of the knee is a major cause of mobility impairment, particularly among females. OA was estimated to be the 10th leading cause of nonfatal burden. 3
The prevalence of knee OA is rising in parallel with population ageing, 4,5 making the search for interventions to reduce disease occurrence and progression even more pressing. The aetiology of the disorder is likely to depend in part on mechanical insults to the joint and in part on a generalized predisposition to OA. 6 Established risk factors include obesity, increasing age, female sex, knee joint injury and meniscectomy. 7 Additionally, a significant body of evidence has accrued suggesting that occupational mechanical loading of the knee joint can cause or aggravate the disease. 8 Management of the knee with osteoarthritis is usually with medications and a few lifestyle changes. The most commonly prescribed drugs are non-steroidal anti-inflammatory drugs (NSAIDs), but these normally come with the risk of heart attack and other vascular effects, kidney problems and gastrointestinal disturbances. 9,10 Thus, there is an increasing use of the second line of drugs, those which are symptomatic and slow-acting, like glucosamine hydrochloride, glucosamine sulfate, chondroitin sulfate, hyaluronic acid, etc. They are supposed to reduce cartilage degradation, improving the patient's symptoms. 11,12 Etoricoxib is a selective COX-2 inhibitor used to relieve pain and swelling in osteoarthritis. It also exhibits gastrointestinal safety. Oral Etoricoxib is rapidly and completely absorbed from the GI tract. Absorption is slowed, but not diminished, following a high-fat meal, meaning that Etoricoxib can be administered without dietary consideration. 13 Glucosamine stimulates the production of cartilage, which leads to joint repair, supporting its use as a symptom-modifying drug in knee OA. Chondroitin stimulates the proteoglycans and hyaluronic acid. It decreases the catabolic activity of chondrocytes and inhibits proteolytic enzymes. NSAIDs, including selective COX-2 inhibitors, have come to play an important role in the pharmacologic management of arthritis and pain. Etoricoxib has been found to be less harmful to the gastrointestinal, renovascular and cardiovascular systems. The available data suggest that Etoricoxib is an efficacious alternative in the management of arthritis and pain, with the potential advantages of convenient once-daily administration and superior gastrointestinal tolerability compared with traditional NSAIDs. Hence, the aim of this study is to assess the efficacy of Etoricoxib versus Etoricoxib with a combination of glucosamine and chondroitin in primary osteoarthritis of the knee in adults.
Materials and Methods
This community-based prospective comparative study was done in the Department of Orthopaedics at Malla Reddy Hospital, Suraram, Rangareddy Dist., from January 2019 to March 2019, over a period of 6 months. 128 patients over the age of 40 years with primary osteoarthritis of Grade 0, Grade 1, Grade 2 and Grade 3 were included in the study. This study was cleared by the institutional ethical committee; the nature of the study was explained to the patients and their relatives, and informed consent was taken from all of them. Pregnant and lactating women, patients with uncontrolled hypertension or uncontrolled diabetes, patients with a previous history of coronary artery disease, cardiac arrest or coronary artery bypass surgery, patients with a history of cardiac, respiratory, hepatic, renal or neoplastic conditions, and patients allergic to NSAIDs were excluded from the study. Demographic data was collected in detail from all the patients.
The site and severity of pain was assessed using the Lequesne Index. The subjects were randomly divided into 2 groups of 64 each. The first group (Group A) was treated with Etoricoxib [90 mg OD for 14 days] and the second group (Group B) was treated with Etoricoxib [14 days] and a combination of glucosamine and chondroitin [1 month]. The severity of the pain was noted from all the patients before treatment, at the 1st follow-up after 1 month and at the 2nd follow-up after 2 months. Counselling was done for all the patients. Statistical analysis was done using the chi-square test, graphs and percentages.
Results
Out of 128 patients included in the study, the number of females was 100 (78%) and males 28 (22%), showing that females were more prone to osteoarthritis than males (Figure 1). Out of the 128 patients, the maximum number of patients with osteoarthritis were in the age groups of 40-50 (51.6% in Group A and 51.6% in Group B) and 51-60 (29.7% in Group A and 31.3% in Group B), followed by the age group of 61-70 (15.6% in both groups), and the least in 71-80 (3.1% in Group A and 1.6% in Group B) (Table 1). 101 (79%) of the patients were from a rural background while 27 (21%) were from an urban background. The maximum number were housewives (29 in Group A and 28 in Group B), followed by farmers (16 in Group A and 15 in Group B) and wage workers (11 in Group A and 15 in Group B). Most of the patients had a grade 3 severity of osteoarthritis by the Kellgren and Lawrence scale of grading. 64.3% of the males and 51% of the females had Grade 3; 32.1% of the males and 39% of the females had Grade 2. Grade 1 was seen in 1 (3.6%) of the males in the study and in 10 (10%) of the females (Table 3). Outcomes at the first and second follow-up, graded by the Lequesne Index, are presented in Table 4.
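The paper states that the statistical analysis relied on the chi-square test. As an illustration only, the sketch below compares the two groups' 8-week category distributions with a standard contingency-table test; the counts are reconstructed by us from the reported percentages (n = 64 per group) and rounded to whole patients, so they approximate rather than reproduce the study's own table.

from scipy.stats import chi2_contingency

# Approximate 8-week counts from the reported percentages (rounding ours):
# columns: moderate to severe, mild to moderate, satisfactory joint function
group_a = [6, 35, 23]   # 10%, 55%, 35% of 64
group_b = [3, 18, 43]   # 5%, 28%, 67% of 64

chi2, p, dof, expected = chi2_contingency([group_a, group_b])
# A small p-value indicates that the distribution of Lequesne categories
# differs between the two treatment groups.
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")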
Discussion
There have been a few studies which have shown that pain due to osteoarthritis can be treated with the use of COX-2 inhibitors. [14][15][16][17] In the present study we compared the efficacy of Etoricoxib alone and Etoricoxib in combination with glucosamine and chondroitin. In our study, the prevalence of osteoarthritis was higher in females (78% females compared to 22% males), showing that females are more prone to osteoarthritis than males, with the most predominant age group being 40-50 years. Most of the patients were from a rural background and were housewives. According to the Lequesne Index, the 1st follow-up showed that, among Group A subjects, 0% were under the extremely severe category, 80% were under the moderate to severe category, 20% were under the mild to moderate category and 0% were under the satisfactory joint function category. Among Group B subjects, 0% were under the extremely severe category, 60% were under the moderate to severe category, 40% were under the mild to moderate category and 0% were under the satisfactory joint function category. After 8 weeks of treatment, at the 2nd follow-up, among Group A subjects, 0% were under the extremely severe category, 10% were under the moderate to severe category, 55% were under the mild to moderate category and 35% were under the satisfactory joint function category. Zeng et al in their study have reported that glucosamine with chondroitin was more effective for pain relief and functional improvement of the individual. Further, they also observed that there were no adverse effects found. 18 Glucosamine is useful for its anti-inflammatory effect in the body and its pro-anabolic effect in the promotion of osteoblast proliferation, as well as in the inhibition of catabolic intermediates. [19][20][21] Chondroitin is a glycosaminoglycan which is a part of the aggrecan structure of the articular cartilage. Their uses are varied: they are anabolic, anti-inflammatory, anti-apoptotic, anti-catabolic and antioxidant. 20,21 In studies by Chan et al, anti-catabolic and anti-inflammatory action was found with glucosamine combined with chondroitin rather than with either of them alone. [22][23][24] A study by Kongtharvonskul et al also reported that the presence of glucosamine in the treatment of osteoarthritis helped in relieving pain faster. 25 In some cases, both glucosamine and chondroitin are recommended for the treatment of osteoarthritis. [26][27][28]
Conclusion
Etoricoxib is a COX-2 inhibitor used in relieving the severity of pain in osteoarthritis. Glucosamine and chondroitin are given as nutritional supplements for improving cartilage function and joint movements. Therefore, they provide additional benefit to COX-2 inhibitors in relieving pain. This study concludes that Etoricoxib 90 mg OD for 3 weeks plus a combination of glucosamine and chondroitin for 4 weeks is more effective when compared to Etoricoxib as single therapy.
Source of Funding
None.
Conflict of Interest
None.
2020-07-23T09:09:34.260Z
2020-07-15T00:00:00.000
{ "year": 2020, "sha1": "a5646d3171ec74bec08f31524338c2d22baba414", "oa_license": null, "oa_url": "https://doi.org/10.18231/j.ijor.2020.003", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "51a68ccbad6ccfbb829a236e1c6aee8a158713f9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
11231430
pes2o/s2orc
v3-fos-license
Synchronous multicentric small hepatocellular carcinomas: defining the capsule on high-frequency intraoperative ultrasonography with pathologic correlation
Purpose
The aim of this study was to define the capsules of synchronous multicentric small hepatocellular carcinomas (HCCs) with use of high-frequency intraoperative ultrasonography (IOUS).
Methods
Among the 131 consecutive patients undergoing hepatic resection and high-frequency IOUS for HCC, 16 synchronous multicentric small HCCs in 13 patients were histologically diagnosed in the resected specimens. High-frequency IOUS and pathologic findings of these lesions were compared, with particular focus on the presence and appearance of the capsule in or around each lesion.
Results
Synchronous multicentric small HCCs were pathologically classified into distinctly nodular (n=12) or vaguely nodular (n=4) types. All 12 distinctly nodular HCCs, including six subcentimeter lesions, showed detectable capsules on high-frequency IOUS and pathology. The capsules appeared as a hypoechoic rim containing hyperechoic foci (n=6), hypoechoic rim (n=5), or hyperechoic rim (n=1) with varying degrees of coverage around each lesion. Histologically, the capsules were composed of a combination of one to four layers consisting of a fibrous capsule, peritumoral fibrosis, prominent small vessels, and entrapped hepatic parenchyma.
Conclusion
Synchronous multicentric small HCCs with distinctly nodular type, even at subcentimeter size, can show capsules with varying coverage and diverse echogenicity on high-frequency IOUS.
Introduction
During hepatic resection for hepatocellular carcinoma (HCC), intraoperative ultrasonography (IOUS) screening detects new nodules in 13.1% to 30% of patients [1,2]. The differential diagnosis of these new nodules is particularly difficult in the cirrhotic liver, where regenerative nodules can be almost the same size as small HCCs. HCC frequently shows a nodular appearance with a fibrous capsule (FC), whose presence is regarded as a helpful diagnostic clue on imaging, particularly ultrasonographic diagnosis, of HCC. Capsules are exceptional in small HCCs less than 1.5-2.0 cm in diameter, many of which are not growing expansively and show a vaguely nodular appearance [3,4]. However, a small HCC of the distinctly nodular type frequently presented as a clear nodule with a FC on pathology [3]. Meanwhile, early recurrence of HCC within 1 year after curative resection appears to arise from intrahepatic metastasis or a synchronous multicentric small HCC [3]. Discrimination between intrahepatic metastasis and synchronous multicentric occurrence is difficult, but it is important for clinical management and post-therapeutic prognosis. It has been reported that the prognosis for patients with synchronous multicentric recurrence after curative resection is significantly better than that for patients with recurrence due to intrahepatic metastasis [5]. Detection and characterization of synchronous multicentric small HCCs in the operative field is critical for curative resection and the post-therapeutic prognosis of HCC. To our knowledge, there have been no reports on high-frequency IOUS (7-17 MHz) findings with pathologic correlation of the capsule in synchronous multicentric small HCCs detected by IOUS during operation on HCC.
Through our early high-frequency IOUS experiences, we have recognized that the presence of a sonographically detectable capsule, even in a small HCC with a diameter of less than 1 cm, is a helpful feature differentiating HCC from cirrhosis-related benign nodules. This study aimed to evaluate the high-frequency IOUS findings focused on the capsules of synchronous multicentric small HCCs. We addressed the following two issues: (1) whether high-frequency IOUS can detect the capsule of a synchronous multicentric small HCC and (2) high-frequency IOUS findings of the capsule with pathologic correlation. Patient Populations and Nodule Selection The local Institutional Review Board approved the study protocol. Between January 2007 and December 2012, 131 patients underwent hepatic resection for HCC at our institution. From January 2007 to December 2008, high-frequency IOUS and pathologic findings in resected liver specimens were retrospectively reviewed. Only new nodules confirmed by excision and included in the resected specimen were investigated. New nodules were defined as nodules that were not detected by any preoperative diagnostic imaging modalities and were newly found on high-frequency IOUS. From January 2009 to December 2012, high-frequency IOUS and pathologic findings of new nodules in resected hepatic specimens for direct pathologic correlation were prospectively analyzed. For direct correlation between new nodules detected by high-frequency IOUS and on pathologic specimens, ultrasonography (US)-guided hookwire localization for the nodules within the resected specimen was done using a 9 cm 20-gauge Kopans hookwire needle. When more than five nodules were detected on IOUS, US-guided hookwire localization was performed for one to four dominant nodules. Dominant nodules were defined as nodules that were the largest, had suspicious capsules, or showed a mosaic pattern within the nodule. Seventy-nine of 131 patients were excluded based on the following: no newly detected nodules (n=59), incomplete records of high-frequency IOUS and pathologic findings (n=9), radiofrequency ablations performed (n=6), or only biopsies performed for new nodules detected by high-frequency IOUS (n=5). Of these 131 patients, 78 nodules in 52 patients met the study-inclusion criteria and were included. In order to focus on synchronous multicentric small HCC and exclude intrahepatic metastasis, the pathologic diagnosis of new nodules was based on pathologic findings of synchronous multicentric small HCC, using the Liver Cancer Study Group of Japan morphologic criteria for synchronous multicentric small HCC as follows: well-differentiated HCC and dysplastic nodule containing well-differentiated HCC foci, or well-differentiated HCC containing moderately-or poorly-differentiated cancerous tissues that are considered to have originated and proliferated in situ [6]. The cellular differentiation of HCC was graded as well-, moderately-, or poorlydifferentiated on the basis of the International Working Party criteria [7]. Of these 78 nodules, 62 were pathologically excluded based on the following criteria: regenerative nodules (n=35), suspected intrahepatic metastases (n=17), dysplastic nodules (n=6), bile duct adenoma (n=2), focal necrosis (n=1), and sclerosing hemangioma (n=1). The study group consisted of 16 nodules in 13 patients diagnosed as HCCs with synchronous multicentric development (Fig. 1). The study cohort consisted of 12 males and one female. The mean age was 59 years (range, 39 to 75 years). 
The size of primary HCCs in 13 patients ranged from 1.7 to 6.0 cm (mean diameter, 3.1 cm). Ten patients had mixed macronodular and micronodular cirrhosis secondary to the hepatitis B virus. Two patients had mixed macronodular and micronodular cirrhosis due to hepatitis C. One patient had mixed macronodular and micronodular alcoholic cirrhosis. Thirteen patients with mixed macronodular and micronodular cirrhosis were Child-Pugh class A (n=10) and B (n=3).
Preoperative Imaging and Surgery
Preoperatively, all 13 patients underwent dynamic computed tomography (CT). CT exams were performed by one of two 16- or 64-detector CT scanners (Light Speed, GE Medical Systems, Milwaukee, WI, USA). For the contrast-enhanced portion of the examination, the patients received approximately 80-130 mL of iohexol (Omnipaque 300, GE Healthcare, Princeton, NJ, USA) or iodixanol (Visipaque 320, GE Healthcare) intravenously by means of a mechanical power injector (Stellant Injector System, Medrad Inc., Warrendale, PA, USA) administered at a rate of 3-4 mL/sec, followed by a 15 to 20 mL saline flush. The standard protocol for triphasic CT consisted of an unenhanced phase, an arterial phase with a scanning delay of 30-40 seconds, and a portal phase with a scanning delay of 60-80 seconds. Magnetic resonance imaging (MRI) was performed in 11 of 13 patients. MRI studies were performed on a 3.0T MR unit (Intera Achieva 3T, Philips Medical Systems, Best, Netherlands) with use of the body coil for transmission and reception of the signal. The pulse sequences used were calibrated to obtain magnetic resonance (MR) images of the liver with optimal anatomic resolution. A standard technique was used for each scan: T1-weighted precontrast and postcontrast Turbo FLASH with coronal oblique and axial scans, and fat-saturated T2-weighted axial scans. All MR studies included dynamic studies with the administration of extracellular gadolinium contrast agent. The surgical procedures were performed by two surgeons with more than 10 years of experience in hepatic surgery. Procedures included right hepatectomy, left hepatectomy, extended resection, segmentectomy, and tumorectomy. If new lesions discovered at high-frequency IOUS were confined to the lobe or hepatic segment designated for resection, the planned surgical procedure was not changed. If new lesions were discovered in a lobe or segment other than that designated for resection, either tumorectomy or intraoperative radiofrequency ablation was performed on the basis of the pathologic result of an intraoperative biopsy.
High-Frequency IOUS and Pathologic Correlation
The examination protocol was standardized as follows. After complete hepatic mobilization by dissecting hepatic attachments and careful palpation of the liver, the liver was thoroughly scanned in a consistent order: (1) tracing all hepatic veins and their tributaries, taking care not to overlook the short hepatic veins, (2) tracing all portal venous branches, and (3) examining the liver parenchyma for a primary lesion and for new nodules by means of systematic and repeated longitudinal and transverse scanning. This scanning sequence was briefly repeated for resected specimens. A subspecialty-trained gastrointestinal radiologist (J.H.A.) with more than 10 years' experience performed all IOUS examinations. A high-resolution US system (iU 22 or HDI 5000, Philips Medical Systems) was used. After hepatic resection, we performed US-guided hookwire localization for the detected nodules within the resected specimen.
Pathologic correlation was made by US-guided hookwire localization using a 9 cm 20-gauge Kopans hook needle for nodules within the resected specimen. The present investigation was a direct analysis of high-frequency IOUS and pathologic findings (n=15) rather than a retrospective review of high-frequency IOUS and pathology reports (n=1). All resected specimens were fixed in formalin, sectioned based on US-guided hookwire localization, stained with hematoxylin and eosin, and examined by a pathologist according to the criteria of the International Working Party [7].
Data Analyses
High-frequency IOUS findings and pathologic findings of 16 synchronous multicentric small HCCs, focused on the capsule, were analyzed by two subspecialty-trained gastrointestinal radiologists (J.H.A. and S.M.J.) with more than 10 years of experience and a pathologist (D.-W.E.) with 9 years of experience who paid special attention to the tumor capsule. The pathologist was not blinded to high-frequency IOUS results and was directed to assess what was seen pathologically in patients with a capsule appearing on high-frequency IOUS. Analysis included the following: (1) the size of synchronous multicentric small HCC nodules measured; (2) the pathologic classification of synchronous multicentric small HCC nodules by type as vaguely nodular or distinctly nodular [3,4]; (3) the histological grade of tumors; (4) when the capsules were present on high-frequency IOUS and the pathologic specimen, the frequency and thickness of capsules; (5) whether high-frequency IOUS could detect the capsule of a synchronous multicentric small HCC; and (6) high-frequency IOUS and pathologic findings of the capsules. High-frequency IOUS findings of capsules were classified into the following three types on the basis of the difference in echogenicity among capsules, the lesion, and the surrounding liver parenchyma: hypoechoic rim (echogenicity of the tumor capsule lower than that of the lesion and surrounding liver parenchyma); hyperechoic rim (echogenicity of the capsule higher than the lesion); and hypoechoic rim containing hyperechoic foci. The degree of coverage by the capsule on high-frequency IOUS was also analyzed. The thickness of capsules was recorded at the thickest point of the capsule for both high-frequency IOUS and pathological analyses.
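The three-way echo classification just defined lends itself to a simple coding scheme. The sketch below shows one way such nodule-level readings could be recorded; the field names and example values are our own illustration, not part of the study protocol.

from dataclasses import dataclass
from enum import Enum

class CapsuleEcho(Enum):
    # The three capsule appearances on high-frequency IOUS defined above.
    HYPOECHOIC_RIM = "hypoechoic rim"
    HYPERECHOIC_RIM = "hyperechoic rim"
    HYPOECHOIC_WITH_HYPERECHOIC_FOCI = "hypoechoic rim containing hyperechoic foci"

@dataclass
class CapsuleReading:
    nodule_size_cm: float    # measured size of the nodule
    echo_type: CapsuleEcho   # one of the three IOUS appearances
    coverage: str            # qualitative degree of capsule coverage
    thickness_mm: float      # taken at the thickest point, per the protocol above

# Example record (values illustrative only):
reading = CapsuleReading(0.9, CapsuleEcho.HYPOECHOIC_RIM, "partial", 1.2)
print(reading.echo_type.value)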
A. In one of the cases in Table 1, a photomicrograph shows that the capsule (arrows) consists of a single layer of fibrous capsule (FC) (H&E, ×100). B. In case 4 in Table 1, a photomicrograph shows that the capsule (arrows) consists of a single layer of peritumoral fibrosis (PF) (H&E, ×100). C. In case 3 in Table 1, a photomicrograph shows that the capsule (arrows) is composed of three layers of FC, PF, and entrapped hepatic parenchyma (EP) (H&E, ×100). D. In case 10 in Table 1, a photomicrograph shows that the capsule (arrows) is composed of three layers of FC, PF, and prominent small vessels (S) (H&E, ×100). T, tumor; N or NT, nontumorous liver tissue.
Discussion
In the setting of liver cirrhosis, many studies using morphologic examination of resected and biopsy specimens of small HCCs, and follow-up studies after curative treatment of HCC cases, have suggested that many HCCs are multicentric in origin. The reported frequency of synchronous multicentric HCCs in surgically resected cases ranges from 15% to 30% [8,9]. Previous reports have indicated that the recurrence rate after resection of HCC is 20%-40% within a year and about 80% in 5 years [10][11][12], and early recurrence appears to arise from intrahepatic metastases and missed early-stage HCC with synchronous multicentric occurrence. With the aim of reducing the early recurrence of HCC, the detection of synchronous multicentric small HCC during surgery for HCC using IOUS, which provides high spatial resolution without interference from the surrounding structures, is clinically important [13]. The present study focused on synchronous multicentric small HCCs detected during surgery for HCC. We demonstrated that synchronous multicentric small HCCs with a distinctly nodular type, even at subcentimeter size, had detectable capsules on high-frequency IOUS. We also found that most of the capsule could be identified by the hypoechoic rim with or without hyperechoic foci. Regenerative nodules are surrounded by thin fibrous septa. However, distinctly nodular HCCs have relatively thick capsules. Unlike the thin fibrous septa of regenerative nodules, we found that capsules of distinctly nodular HCCs consist of a FC, PF, small vessels, and EP. Our five distinctly nodular HCCs had capsules greater than 1 mm in thickness. These capsules contained the entrapped parenchymal layer; this layer may be associated with the evolution of the regenerative nodule to early and subsequently advanced HCC.
(Case in Table 1.) A. High-frequency IOUS shows a hyperechoic hepatocellular carcinoma (HCC) with detectable hypoechoic rim (arrowhead) containing hyperechoic foci (arrow). B. Photograph of a resected specimen shows a distinctly nodular HCC. C. In photomicrographs of the tumor (T) (H&E, ×40) and the capsule in an anatomic location corresponding to the arrowheads in A, B, and C, the hypoechoic area of the capsule on high-frequency IOUS is composed of peritumoral fibrosis (Fig. 4B).
To our knowledge, this is the first report of high-frequency IOUS findings focusing on the capsule of synchronous multicentric small HCCs. Small HCCs of the distinctly nodular type show expansive growth, and many are encapsulated [3]. The main mechanism of capsule formation is thought to be the condensation of the fibrous elements of the surrounding noncancerous liver tissue due to the mechanical pressure of expansive tumor growth [3,4]. The distinct subnodular HCC within regenerative or dysplastic nodules, with a nodule-in-nodule appearance, gradually increases in size and eventually replaces the maternal regenerative or dysplastic nodule, and thus the capsule may contain entrapped parenchyma. The subnodular HCCs within regenerative or dysplastic nodules can show different tumor differentiations, and thus variations in the rate of expansive growth. The difference in growth rates of subnodular HCCs within a regenerative nodule or dysplastic nodule may reflect the various degrees of capsule coverage [3]. Therefore, the various degrees of capsule coverage in our cases may be associated with the evolution of a regenerative nodule to early and advanced HCC. Histologically, capsule formation was confirmed in about 53% of distinctly nodular HCCs, even in relatively small tumors of less than 2 cm in diameter [3]. Fibrous capsules with two layers were reported: an inner layer rich in a fibrous component and an outer layer containing various numbers of small vessels and newly formed bile ducts [14].
A report demonstrated that some HCCs showing an enhanced rim on dynamic MRI do not actually have a true FC histologically [15]. In these cases, the pseudocapsule seen on MRI represents prominent hepatic sinusoids and/or PF. In our cases, the outer layer and prominent sinusoids were not prominent, which likely reflected poor development of tumor vascularity and draining veins as a small, early-stage HCC. The bright-loop appearance as a sonographic finding for HCC with a hyperechoic rim in the late stages of dedifferentiation of well-differentiated HCC has been demonstrated [16]. The bright-loop appearance represented the fatty change of well-differentiated HCC containing low echoic HCC with moderate differentiation. However, the main histological finding of hyperechoic areas in the six HCCs with a hypoechoic rim containing hyperechoic foci and one HCC with hyperechoic rim among our cases was entrapped parenchyma. This study has several strengths. The present investigation was a direct analysis of high-frequency IOUS findings and pathologic specimens rather than a retrospective review of high-frequency IOUS and pathologic reports; pathologic correlation was made using US-guided hookwire localization for nodules of interest within the resected specimens. Although the focus on synchronous multicentric small HCC has both merits and demerits, there was a selection bias. The reported incidence of capsules in small distinctly nodular HCCs is about 53% of 80 cases. In this series, high-frequency IOUS detected all capsules of 12 synchronous multicentric small HCCs with distinctly nodular type. These results may be considered as another selection bias, occurring because we defined dominant nodules as nodules that were the largest, had suspicious capsules, or showed a mosaic pattern within the nodule. There are also several limitations to this study. First, our sample number was small. Second, results were descriptive rather than analytic because of the small sample size. It is unclear whether the histological finding of entrapped parenchyma represents dysplastic nodules. Some EP showed increased cellular density. Third, MRI with extracellular contrast agent was only used for preoperative evaluation. This evidences a selection bias and methodological limitation because MRI performed with the hepatobiliary phase agent gadoxetate had high per-lesion sensitivity for lesions ≤20 mm [17]. Finally, the logical next step may be to evaluate whether the detectable capsules of synchronous multicentric small HCCs with distinctly nodular type on high-frequency IOUS could be considered an important finding differentiated from benign regenerative nodules. Thus, further study may be needed. In summary, high-frequency IOUS detected all capsules of 12 synchronous multicentric small HCCs with distinctly nodular type. The capsule of synchronous multicentric small HCCs with distinctly nodular type detected by IOUS showed a range of capsule coverage of the hypoechoic rim with or without hyperechoic foci (n=11) and the hyperechoic rim (n=1). The varying degrees of capsule coverage may be associated with the evolution of a regenerative nodule to early and advanced HCC. These detectable capsules should be considered another IOUS finding of synchronous multicentric small HCC with distinctly nodular type.
Conflicts of Interest
No potential conflict of interest relevant to this article was reported.
(Case in Table 1.) A. High-frequency IOUS shows a hyperechoic hepatocellular carcinoma (HCC) with detectable hypoechoic rim (arrowhead) containing hyperechoic foci (arrow). B. Photograph of a resected specimen shows a distinctly nodular HCC. C. Photomicrograph of the capsule in an anatomic location corresponding to the arrowheads in A and B shows that the hypoechoic areas of the capsule on high-frequency IOUS consist of a fibrous capsule (FC) and peritumoral fibrosis (PF) (arrowheads) (H&E, ×40). D. Photomicrograph of the capsule in an anatomic location corresponding to the arrow in A and B shows that the capsule consists of FC and entrapped hepatic parenchyma (EP) (H&E, ×40). The hyperechoic focus of the capsule in A is mainly composed of EP (arrows). N, nontumorous liver tissue.
2018-04-03T00:23:08.597Z
2016-04-09T00:00:00.000
{ "year": 2016, "sha1": "4ef7a0b97cafdfe2da16baa50feb5d45a10507b7", "oa_license": "CCBYNC", "oa_url": "http://www.e-ultrasonography.org/upload/usg-16001.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4ef7a0b97cafdfe2da16baa50feb5d45a10507b7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226663525
pes2o/s2orc
v3-fos-license
Seeing by proxy: a detailed analysis of an educational interaction at the telescope
Astronomy education research is a growing field but the attention given to informal educational activities, such as telescope observations, museum visits or planetarium sessions, is still relatively scarce. In consequence, the area is poorly studied and understood. Addressing this gap, the present paper examines informal educational practices in an astronomical observatory through detailed analysis of a complete turn at the telescope by a small child, who is observing the Sun with the assistance of a guide. Using Ethnomethodology and Conversation Analysis, this study investigates how this activity was produced in terms of structure and methods, the skills the participants have, and how the interaction between the visitor and the guide occurs. The study of these naturally occurring activities is done in depth by the repeated inspection of video data, in order to identify the characteristics of the interaction, the organization of the talk and its implications as an educational event. The interactional nature of linguistic exchanges is highlighted, and the study of these activities reveals the practical methods used by guides and public. The present study contributes to our understanding of telescope observations as informal education activities and shows the importance of research methods that are sensitive to naturally occurring events.
Introduction
Astronomy education research is a growing field of inquiry [1], and studies have shown that it is mainly focused on specific astronomical content and conceptions of students at school and university levels [1,2]. However, the informal 1 education of astronomy -the activities conducted in museums, science centres, observatories, etc., with some structure and planning but also flexible, visitor-centred and collaborative [5] -is poorly studied [6,7] and, in consequence, not widely understood. Specifically regarding activities involving the use of telescopes, such as star parties or observations of the Sun, the research is virtually non-existent [8]. With the exception of one study of families at star parties [8], and another of solar observations at an educational observatory [9], the only studies found involving telescopes and education are the ones using remote telescopes, but not in situ manipulations [e.g. 7,10,11]. There are also studies concerning the construction of simple telescopes for the study of optics or teacher training [e.g. 12], and advice on which activities work or fail while conducting astronomical observations of the night sky with telescopes [13]. However, these studies do not explore data collected during actual observations. This comes as a surprise, as telescope observations constitute a widely conducted outreach activity in astronomy education, in museums, science centres, observatories, and star parties. It is an activity with great potential to promote "compelling learning" as it involves multi-sensorial interaction (listening, seeing, touching) associated with "sharper focus and more memorable experiences" [14] (p. 203).
* Correspondence email address: moutinho@um.edu.mo.
1 The term informal education is the most used in international literature [3]; however, in Brazilian literature the term non-formal is preferred [3]. For a good review of these terms in Portuguese in relation to astronomy education, see [4].
Moreover, researchers advocate the introduction of more observational activities to enhance learning of astronomy [12,13,[15][16][17]. Even so, there is no knowledge about, for instance, how these activities are produced in terms of structure and methods, the skills guides have, or how interaction between the visitor and the guide occurs. In the general field of informal education, attention starts to be given to guided visits and the work of the guides [18,19] and it is acknowledged that their work is not simple as it involves a set of specific knowledge and skills [20]. Here also the research is scarce [18,21,22] "and specifically in the domain of science, limited attention has been paid to the role and practice of museum educators" [20] (p. 131). Some studies focus on the guides' conceptions of learning, their beliefs and reflection on their work [e.g. 23]; on content and pedagogic strategies [e.g . 24]; the role of these professionals inside their informal education institutions [25]; or their ideas about visitors' learning [26]. Some research also compares more or less structured guided interactions [27] or studies "how guides and their audiences produce spaces for showing and seeing" [28] (p. 2). Skills such as "communicating information, adapting to a specific audience, and maintaining a sense of humor" [29] or using questions to guide the interaction and generate conversation and changing intonation to keep the attention of the public [19] are referred as important. Also, methods for object presentation are proposed [30]. Nonetheless, there is still little research about the real practices, methods and interactions that produce a guided visit in situ, and virtually none, as already mentioned, concerning telescope-guided observations. Therefore, we propose an ethnomethodological study [31], through the detailed analysis of a recorded real event of outreach activities at the telescope, aiming at contributing to this not widely studied scenario by revealing how it is produced, on a moment to moment basis. Our interest in this paper is to indicate and describe the (collaborative) work involved in an instructed observation at a telescope. The participants (guide and visitor) are involved in an activity to observe the Sun and its sunspots as part of an outreach program, which takes place at an astronomical observatory in Coimbra, Portugal. The main questions we pursue here are (i) how do the guide and visitor work in cooperation to successfully achieve the goals of the observation? and (ii) what are the methods and skills used by the guide to orient the visitor to see the phenomenon (sunspots) that was observable on that day? This paper continues with a brief summary of our praxiological approach, followed by the methodology and the description of the setting and data collected. We then make a detailed analysis of the studied event and a discussion of the findings. We conclude with some potential contributions that this study can offer to the field of astronomy education. A praxiological approach to educational telescope observations In this paper we explore the ordinary ways in which a scientific observation is enacted and the mundane practices of doing a scientific observation with a child, who, as it will be possible to observe in our data, has little or no previous experience on how to manipulate a telescope. However, what do we gain in exploring the ordinary ways of teaching a young boy how to observe phenomena in the sky in an astronomical observatory? 
The topic of scientific literacy is increasingly important, with studies suggesting that as much as half the knowledge in scientific fields is learned outside the school formal education settings, with science learning being a 'lifelong', 'life-wide' and 'life-deep' process [32]. Based on Garfinkel [31], the founding father of ethnomethodology, by describing and explicating how people produce scientific literacy, we are gaining access to the alternate and usually neglected legacy of the objective reality of science education. In other words, we are describing the taken-for-granted "things" that participants do in order to get 'things' done in a guided visit to an astronomical observatory. These 'things' are practical orientations or ordinary methods that participants use to make sense of each other's actions. These methods revealed during their work are accountable episodes of their very own practices, i.e. these are self-explicating; these in situ accounts are preserved, for analysis, in the recordings. This is why a radical praxiological approach is necessary to identify the taken-for-granted, common details of the work produced by the participants involved in this activity. Otherwise, these details will go unnoticed, as they are in our everyday practices, and will not be retrievable as topics of inquiry in their own right. According to Garfinkel [31] (p. 104, parenthesis added): "Almost four decades of Ethnomethodological investigations provide evidence that a domain of things escapes from accountability with the same methods of technical formal analytic reason that are used to describe them adequately and evidently. The domain of things that escape from FA (formal analysis) accountability is astronomically massive in size and range." This "astronomically massive domain of things" is what can be studied by a radical praxiological approach, which uses "retrievable data" [33] to study the missed domain of instructed actions. Among these topics, we have identified a particular one for this article, which is the strategy that the guide uses to orient a boy's observation while he is at the telescope spotting the Sun. As the telescope has only one eyepiece the guide needs to know what the boy is looking at so that he can explain the phenomenon that should be highlighted during the boy's turn at the telescope. Through a set of methods (question-answer (QA) sequences; displays of engagement, silences, etc.) the guide can make sense of what the boy is actually seeing and then can turn the boy's observation to the phenomenon which was clearly visible on that day -the sunspots. We call this move 'seeing by proxy', since the guide does not have direct access to what the boy is seeing, whereas the boy, who has direct access to the image of the Sun, does not have enough knowledge to orient his view towards the sunspots and make sense of what they mean. Therefore we will explore here a point of intense cooperation between participants, which will make visible the "dark matter" of instruction [34], that is, the description of the taken-for-granted interactional contexts produced by participants in educational settings that provide for and surround the instructional events that become a focus of analysis for educational research. 'Seeing by proxy' refers both to the equipmental enhancement provided to the human eye by the telescope, and crucially, to a defining characteristic of astronomy education, in terms of viewing astronomical objects as instructed actions. 
Methodology As mentioned above, this paper reports an ethnomethodological analysis of a boy's turn at the telescope oriented by a guide in an astronomical observatory. To do so, the observation at the telescope was video recorded. These data (video and audio) were analysed in detail and a transcription of the interaction, using a notation system developed by Gail Jefferson [35], was produced (see Table 1, for the transcript and its translation; see Appendix I for the notation system). The recording of naturally occurring events allows multiple visualizations and indepth study of the interactions and its particularities [36]. Through close examination, it is possible to produce a detailed analysis of the sequentiality of the interaction, including the pauses, the silences, the gestures, the smiles and looks, the manipulation of the telescope and the situated content of that interaction, which, all together, produce the event in question. The ethnomethodological approach provides us with devices to bring the procedures of this educational activity (at its very details) to the explicit attention of the analyst [37]. By examining the turn-taking organization of talk we are able to make explicit what is currently tacit [38]. Therefore, ethnomethodology takes 'cognitive' or 'perceptual' activities (such as 'seeing') as interactionally organized practices rather than individual 'perceptual' or 'mental processes' [39]. This means that the effort to see phenomena in the sky does not depend on the guide or the child alone, but on a cooperative effort of both. It is through (and in) their conjoint actions that they will produce an observation at the telescope that indicates how teaching and learning are accomplished in practice, through an intense cooperation of the participants. This intense cooperation will be explored in this paper, exhibiting not only the detailed description of the participants' practices but also explicating such practices in a way that readers can have access to instances of educational practical work and adapt its locally produced particularities to their own distinct circumstances. According to Goodwin [40] (p. 607), the advantage of using what are referred to above as retrievable data is being able to make "repeated, detailed examination of actual sequences of talk and bodied work practices in the settings where practitioners actually perform these activities". Such an approach to data and data analysis offers a practical view on the work of science educators, who can benefit from the demonstration of successful educational accomplishments. The setting and the data The data analysed in this article were recorded at the Geophysical and Astronomical Observatory of the University of Coimbra 2 (OGAUC), in Portugal. The OGAUC is a centenary observatory and a prestigious scientific institution for the study of the Sun and has one of the largest and most complete collections of heliograms (pictures of the Sun), obtained daily for research proposes, since 1926. It is also a complete centre of informal education for organizing astronomical visits for the general public and schools. The Observatory complex includes a museum with astronomical instruments and artefacts, a planetarium, a dome with a telescope, used for observations of the Sun and the night sky, and a spectroheliograph (special equipment used to photograph the Sun) that can be visited. 
The guides who conduct the visits and activities are trained university students or professional astronomers who also work and do research in the observatory. The data comprise a 2.02-minute excerpt of the beginning of one visit to the astronomical dome with families and individual visitors. More specifically, it is the turn of the first visitor at the telescope, a young boy, who (with the help of his mother and the guide) is ready to do the observation of the Sun through the telescope. The dome is a circular building with a round ceiling. The ceiling rotates horizontally and has a window, which goes from the rotating base up to the top and can be opened. Inside the building there is a cylindrical platform in the middle where the telescope is positioned. The platform can be accessed through a round staircase. These details can be seen in Figure 1. Prior to the visitor's turn, the guide -a professional astronomer -sets the telescope and the dome ready for the observation. While doing so, the public is already inside the dome, forming a queue and awaiting their turns along the stairs. When everything is set the guide starts calling visitors one by one to look through the telescope eyepiece and observe the Sun. If the visitor is a young child he or she can use a wooden step to reach the eyepiece with the help of an adult (in this case, the boy's mother). When using this kind of telescope with a filter, the Sun will look like a yellowish disc. Depending on the day, it can also show some dark spots -the sunspots. That was the case of the observation recorded. The sunspots are not always present and they come and go within days. They represent a phenomenon ruled by the complex magnetic activity of the Sun, when parts of its surface are colder than the rest. Consequently, when observed, they look much darker than their surroundings. Sunspots are always a positive feature when doing science outreach since they add details to an otherwise simple yellow solar disc. The sunspot pattern that the visitors observed on this particular visit can be seen in Figure 2, which is a black-and-white image from the spectroheliograph recorded at the Observatory on the exact day of the observation.
2 http://www.astro.mat.uc.pt/novo/observatorio/site/index2.html. The permission to use the material analysed here was sought and the approval was issued by the mentioned institution. The official consent is available from the authors.
Analysis: a turn at the telescope
In this section, we present a detailed analysis of the complete turn of a boy at the telescope, for observing the Sun. As mentioned before, it is a special turn because it is the first one of the day, serving not just to instruct the visitor doing the observation but also to demonstrate it to the others in line who are able to see and hear the interaction between the guide and the visitor. Being the first one in line also means that the explanation is done for the first time. As such, this turn serves as a series of "explicative transactions" -an interactional event "in which what one does next will be seen as defining the importance or significance of what another did before" [41] (p. 228). The guide provides instructions and explanations for the boy but in doing so also provides instructions and explanations for overhearing visitors who are waiting for 'their' turn at the telescope. First, we will be looking at the parts of the interaction as sequential constituents of a complete turn at the telescope.
This allows us to look at its internal structure first and focus on its details later, when we highlight specific features of interest exhibited by the data. Looking closely at this first turn, we propose that it can be divided into different sequential sections (see Table 1). It begins with an initiation (lines 1-3), an invitation to come and look through the telescope. Then we have the setting up (lines 4-16), that is, the instructions on seeing and positioning to see. After that, we have the seeing part, or the getting to see part (lines 17-38). This is followed by the explaining the seeing (lines 39-51), where the boy is still seeing, but now makes sense of the phenomenon in view. From line 52 to line 59 the guide and the visitor "engage" in the stopping seeing part, and finally the closing part, where the turn comes to an end (lines 60-65). From line 66 onwards the boy is already off the bench and going away whilst the guide positions the telescope again for the next visitor. We will now look closely at what is happening in each of these parts and explicate their boundaries. Of course, these are not tight and rigid. Consequently, these should not be viewed as discrete entities.
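For reference, the sequential organization just outlined can be restated compactly. The snippet below merely re-expresses the transcript line ranges given above; it is a descriptive aid of ours, not part of the original analysis.

# Sequential sections of the boy's turn at the telescope, keyed to the
# transcript line ranges reported in the text (line 66 onwards falls
# outside the turn proper).
turn_structure = {
    "initiation": (1, 3),
    "setting up": (4, 16),
    "getting to see": (17, 38),
    "explaining the seeing": (39, 51),
    "stopping seeing": (52, 59),
    "closing": (60, 65),
}

for phase, (start, end) in turn_structure.items():
    print(f"{phase}: transcript lines {start}-{end}")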
Initiation (lines 1-3)
The turn starts with an initiation sequence, done after the preparation of the dome and telescope for the observation (before the beginning of the transcription). With a gesture (line 1) and an invitation ("would you like to be the first/ to come u:p?" -lines 2 and 3) the guide calls the first person in line to approach. However, this invitation and gesture (see figure 3) are more than just marking the beginning of the boy's turn. Pointing to the bench and saying "would you like to be the first to come up" turns out to be also the first instruction the guide gives the boy -to step on the bench. The position of the eyepiece is too high for the boy's height. Visitors in this situation should climb the bench in order to reach the eyepiece in a comfortable position to use the telescope. In our current case the instruction is also directed to the mother, who helps the child to climb the bench, and indirectly to the other visitors waiting in line for their turns.
Setting up (lines 4-16)
The setting up starts with the bench positioning at line 4 and ends at line 16. Some preparatory work needs to be done before the actual observation starts. This work has two distinct parts: there are instructions on seeing and positioning to see. From line 9 to 11 the guide instructs the child on how to deal with the telescope -"when you look through don't grab the small tu:be /because if so it will shake more /and the more it shakes (0.5) the less we see" -basically, it is not to be touched. Line 7 is also an instruction on how to see -"now you can look through =" (the eyepiece). But this utterance has a double function. It is also related to the positioning of the boy, so that the guide can evaluate if the boy is ready to start. It turns out that he is not, and the bench needs to be moved again. The bench is then moved (see right frames of figure 4), the eyepiece is repositioned, and everything is apparently ready "like this" (line 16) to start the observation. Looking at the instruction given from lines 9 to 12 more closely, the instruction (line 9) is to not touch the telescope, which he justifies (lines 10-11). Line 12 is the second part of the adjacency-pair 'instruction-confirmation', i.e. a confirmation of understanding of the instruction (a nod from the boy). What is of interest here is the changing of subject from the instruction to the justification -"when you look through don't grab the small tu:be /because if so it will shake more /and the more it shakes (0.5) the less we see" (lines 9-11). The instruction is directed to the child, which serves as an "explicative transaction" [41], because as an instruction it is directed to the cohort of visitors. If the small tube (the eyepiece) is grabbed the telescope will shake and there will be a negative consequence (seeing less) -"the more it shakes (0.5) the less we see" (line 11). Who is addressed with this "we"? "We" is an indexical (or contextual) reference; to whom is it referring? One potential inference is that "we" is the boy plus the guide; another one is that it is all the visitors together. Therefore, the consequence of shaking the eyepiece does not affect only the child but also other people. This is even clearer if we take into consideration the other two features of the interaction: the short pause (0.5 seconds) before the delivery of the consequence (line 11) working as a boundary between two recipients; and the eye contact of the guide. The guide is adjusting the eyepiece while talking to the boy, looking at the telescope until line 10 (see left frames of figure 4) and changing his gaze to the boy while uttering line 11. This also allows him to seek confirmation of the instruction, which is delivered by the boy at line 12. This use of "we", being more or less inclusive, also gives a sense of co-observation to the interaction: the boy will not be doing the observation alone. This will be further discussed in this article.
Getting to see (lines 17 to 38)
Finally, all seems ready and clear, and the observation can start. At line 17 the child starts looking through the eyepiece but only starts the observation at line 26. An activity like this one may require adjustments along the way. The guide understands that the boy cannot properly see the object and further adjusts the eyepiece until the "yes!" at line 26 is heard. Two things inform the guide about the non-seeing status. First, the silence to the questions at line 19: "what do you see?" and again at line 23: "are you seeing the yellow moon?". Second, the body language of the child: between lines 17 and 25 the child is moving his head, trying different positions and angles for looking inside the eyepiece (see left frames of figure 5). These movements are subtle but clearly show to the guide that the child is still not seeing well, since the lens is positioned at a level that is too high for the height of the boy's eyes. He then adjusts the equipment (line 25; second frame from the left in figure 5) until the boy shows that he is seeing something by answering the question uttered at line 23. Looking closely at the two questions the guide asks the boy, we can see that he was also testing another hypothesis -that the boy was seeing but did not know how to describe it or what he was supposed to see. If that was the case, the silence to the open question "what do you see?" (line 19) was also justified; and the solution of asking another question with a "candidate answer" [42] -"are you seeing the yellow moon?" (line 23) also points to the same possibility. Moreover, this offering of a candidate answer is done almost simultaneously with the adjustment of the eyepiece. This leads us to conclude that the guide is testing different solutions at the same time.
The level of certainty about what the boy is actually seeing through the eyepiece increases with a series of prompts. The guide does not have direct access to what the child sees, because a limitation of the telescope is that it only allows for viewing by one person at a time. We may say that he, the guide, views by proxy, but to do so he needs to find ways to gain access to what the child is seeing. He does that by questioning the boy (lines 19, 23, 27, 28, 30, 32, 34 and 36), whereby 'getting to see' is recognized by question-answer adjacency pairs [43]. However, the questions and answers are varied. Looking at lines 19 and 23, we can see that the open question (line 19 -"what do you se::e:?") did not receive an answer, so a different question was asked (line 23 -"are you seeing the yellow moon?"). This question gives a target to the boy and is asked in a very specific way, designed for that specific recipient. They are not seeing the moon but the Sun. However, astronomical concepts such as the shape of the Sun, Earth and Moon are connected and are influenced by observation [15,44]. So, the boy is probably more familiar with the sight of a big round yellow moon in the night sky, and this analogy might help. What is being said is something like "do you see a round yellow disc like the Moon?" or "what you are supposed to be seeing is similar in colour and shape to the Moon". The guide continues with simple analogies at line 28 ("yellow ball"), and the boy answers again affirmatively. We can see that the question at line 28 is the first one after the adjustment period. Lines 26 and 27 (the boy saying "yes" and the guide asking "are you seeing?") are almost simultaneous, so the guide (and viewers of the recording of this interaction) can say that it is only now that the child is seeing something. The question formulation at line 28 shows exactly that -"is it a yellow ball?". The indexical term "it" refers here to "that thing that you are seeing". The boy promptly answers the question affirmatively. The guide keeps asking about the colour, giving the boy a different option at line 30 ("or is it red?"), because he needs to be sure what the boy is seeing. Answers such as "yes" or repetitions of what was said previously are claims, rather than displays of understanding [45][46][47], or in our case 'displays of seeing'. As Jefferson [48] states, claims are recognizable features of "passive recipiency". What the guide is looking for is an "active" display to move forward. A claim is not enough. Similar to "understanding", seeing is also "a practical achievement of participants through talk" [47] (p. 93). To adjudge the observation successful, it is crucial for the guide that the viewer (in this case, the boy) does not just claim that he is seeing but displays that he is seeing. In the question-answer pair under analysis, knowing that the image of the Sun is yellow, the guide would eventually be expecting a negative answer, but the boy pauses for a full second, observes and returns with a description -"it is yellow and a little red" (line 31) (see right frames of figure 5). Seeking understanding in order to move on, the guide formulates what the child said, offering a "candidate reading" [46] -"ok. so it is orange" (line 32). A formulation works by "producing a transformation or paraphrase of some prior utterance. Such paraphrases preserve relevant features of a prior utterance or utterances while also recasting them. They thus manifest three central properties: preservation, deletion and transformation" [46] (p. 129).
As these authors also suggest [46], a formulation asks for a confirmation, a disconfirmation or, more generally, a decision (p. 141), which the boy gives at line 33 (in this case a confirmation). At this point there is no doubt that the child is seeing the yellow disc of the Sun in his field of view. Together they arrived at that observation, together they are seeing the Sun -the boy directly and the guide by proxy. The boy knows that the disc is yellow because the guide tells him so (lines 23, 28 and 34). For the boy the disc might look a little more orange, or he is just not yet very good at distinguishing colours. Yet the guide has enough elements to evaluate this exchange as a display of seeing/understanding. He then moves on. Now that the big picture has been identified, the guide calls the child's attention to the details of the image, and points out that there are other things to be seen. He continues the QA sequence (lines 34 and 36). It allows him to redirect the boy's gaze and at the same time investigate whether the boy is really seeing what he was 'supposed to see'. The guide first asks if the image is all yellow, or if it has spots (line 34). The boy exclaims "wow! it has some spots" (line 35). This conclusion is not enough for the guide, who proceeds to ask about the colour of the spots. "uhm black" (line 37), says the boy, observing carefully. The point here is that the guide is not just trying to find out whether the boy is seeing it; he is instructing him, guiding him to see the phenomenon, which makes the experience pedagogically meaningful. He "gives" him the spots and then asks him to look at them in detail and describe their characteristics, in line with what they have been doing previously with the colour of the solar disc. This can also be considered a teaching-learning moment based on the way scientific observations and discoveries at the telescope work -you look closely, you identify and you describe. As Lynch [49] states, "simple or common examples enable insight into the complex and rare skills of the scientist, and their use suggests that scientific observation is a matter of learning to see things under specialized circumstances" (p. 90). As discussed, the getting to see part of the interaction is an instructed action [31]. The technique the guide uses is progressively going from the disc to the spots, from the big picture to the detail. Interactionally, child and guide communicate, not looking at each other, not seeing the same thing, but progressing in the observation together. The child, attending to the instructions and answering the questions, was able to see the Sun and the spots. The guide, giving instructions and asking questions, was able to lead the boy to see them and to be sure he was actually seeing what he was supposed to see. We highlight that instructions and questions, and also seeing and answers, were reflexively related and occurred in quick succession. Instructions were given in the form of questions, which were also ways of gaining access to the boy's view. Seeing meant successfully following the instructions and answering the questions. In other words, seeing was a display of understanding [50]. At the end of what we considered to be the 'getting-to-see' part, the guide gives the child a moment to absorb and contemplate the content of the instructed seeing (line 38). This pause marks a transition. The seeing part gets done (although the observation continues). Now it is time to explain what has just been seen.
Explaining the seeing (lines 39-51)
The boy claims to have seen some spots, but he still needs someone to explain what they mean. This part of the event starts with the formulation "you just saw sunspots" (line 39) -a "formulation of upshot" [46] produced by the guide, commenting upon what the child saw. This device, more than summarizing and clarifying, marks that they have arrived at a critical point, a point where the 'product of their seeing' was not enough to make sense of the observed phenomenon. More than that, seeing sunspots is special. The boy is told that the Sun has spots, that it is not every day that the Sun has spots (line 40) and that he is so lucky because today there are many of them (line 41). All this information is delivered slowly, with pauses in between, while the child continues to look through the eyepiece lens. The work of the guide is not just to show (to lead the boy to see) that the Sun has spots, but also to explain the special character of what is being seen. The guide highlights the importance of the sunspots and transforms this observation into a memorable moment. By the boy's display of enthusiasm at line 35 ("wow! it has some spots") and his complete engagement when looking through the telescope, we can say that not much extra work is necessary to make this a special occasion. The boy observes the Sun for 1:17 minutes almost non-stop (from line 17 to 59 the child moves his eyes away from the lens only once, at line 47). However, when looking at the practical issues involved in the in vivo collaborative seeing of the guide and the boy, we will find that a lot of work was still necessary to account for the locally and endogenously achieved completeness of their observation [31]. The boy's engagement in the activity of seeing through the telescope is visible in the next few utterances (from line 43 onwards). After a big pause (line 42) the guide is ready to start a new part of the explanation. First, the guide seems to start that explanation at line 43 ("and the-the spots") but rapidly changes his strategy. This appears to be because the child is so 'entertained' with the equipment that he does not look at the guide when he starts his explanation. Instead of providing more information, the guide asks a quick question, "is the Sun hot or cold?" (line 44). He waits, but gets no answer. He then gets closer to the boy and asks again, and again, and finally gets an answer. At this point (line 47) the boy withdraws from the telescope and answers the guide's question very quickly ("hot"), immediately getting back to his business of looking at the Sun and sunspots through the telescope. The change of strategy proves to be a good way of recapturing the child's attention. Once again, the guide made a good reading of the situation and reacted accordingly. Questioning in this case seems to be a more interactive form of communication, and allows the guide to find out the level of knowledge of the child. The child obviously knows that the Sun is hot, answering with a tone that can be heard as "that is obvious and/or here is your answer and now leave me alone". The guide, on his side, makes a long "ahhh" (line 48), like an "I finally got an answer". He then repeats and summarizes it (formulates it) and finally explains what the sunspots are. He is very skilful with his pauses -at line 50 another one can be identified, separating the cause (see line 50) and the effect (see line 51) of the explanation. Again, a big pause marks the end of the explanation (line 52).
The sunspots have been seen and explained. It is time to conclude this activity and move on to the next visitor.

Stopping seeing (lines 52 to 60)
One last thing is taught -the image of the Sun moves and gets out of place due to the Earth's rotation. As explained to another visitor later during this same visit, this telescope does not have a movement of compensation for Earth's rotation and therefore, within about 2 minutes, the image of the Sun starts getting out of the eyepiece's field of view. That is what is happening, and it is commented upon between lines 53 and 59. The experience of the guide allows him to estimate that the image is probably starting to get out of sight (and he may also be mentioning this phenomenon as an excuse to bring the boy's turn at the telescope to a close). In other words, the guide is opening up a closing [43], since the topic of the talk (the sunspots) is being replaced by another one (the displacement of the image). There is a long pause of 2.5 seconds (line 52) that marks this shift of topic. The guide then continues to ask questions to gain access to what the boy is seeing (lines 53 and 54), to see if it matches what he predicts is happening -the image of the Sun is going out of view. Before any answer from the child, the guide starts moving the telescope using the remote control, without being able to see the consequences of his actions. Is he trying to correct the displacement or to make it bigger? The boy eventually claims that he is not seeing the full image of the Sun anymore, but even so he still waits 5 seconds before moving his eyes away from the eyepiece. It seems that the boy cannot get enough of it. However, the movements and the displacement had that dissuasive effect, making the boy stop his 'seeing'. The displacement of the image is big, as can be inferred from the "eh lá" interjection made by the guide at line 68 (this interjection is used in Portuguese to show surprise when something sounds exaggerated) and the long adjustments that the guide has to make later, when he regains access to the equipment (line 69). The boy is not looking through the telescope anymore. The turn is almost brought to an end.

Closing (lines 61 to 65)
The child is not looking through the telescope anymore and briefly looks at the guide, smiling, waiting for guidance. The 'nextness' that the boy's action triggers is very clear: someone needs to end his turn at the telescope for him. The guide and the mother do it together. First, at line 61, the guide looks at the boy and gives him a big smile at the same time as an "ahnn?" (see left frame of figure 6). This utterance can be considered a pre-closing clause [43]. The interjection and the smile, delivered while looking directly at the boy, who is also smiling, have the value of wrapping up, of asking for an evaluation, as if saying "it is something, ahhn?". The guide does not ask him if the experience at the telescope was worth it; he knows it was (from the boy's reactions). Instead he shows him a shared enthusiasm, the evaluation of the experience that he has inferred from the child's attitude so far. The evaluation signals the ending of the activity. The mother picks up on that and intervenes (line 63) using an expression in Portuguese, "pronto", which means "done". The guide repeats it at line 64, reinforcing the conclusion of the activity, and the mother finally helps the boy step down from the bench at line 65 (see second frame from the left of figure 6).
By saying "let's let the other boy see it" (line 65), the mother is justifying the need to stop the observation. It is a well-known 'parent's excuse' to get a child to stop doing something as it has a normative value in it. In effect it is saying "other people are waiting in line to see, let's not keep them waiting longer". This move helps the boy follow his mother's claim, by putting himself in the position of the person waiting for him to finish. All this happens quickly, in about 3 seconds, and is produced conjointly by the mother and the guide, following this sequential order: evaluation (line 61)-completion (line 63)-reinforcement (line 64)justifying ending (line 65)-moving out (line 65). Mediation, asymmetries and seeing by proxy Expertise and non-expertise are core issues in instructional and educational environments such as the one studied here. Both phenomena are visible in numerous accounts, but more than that, the fundamental pair expertise-non-expertise holds asymmetries in knowledge and asymmetries in perception [51], that educational events such as this seek to reduce. Therefore, exploring how such knowledge imbalances are made visible, how expertise and non-expertise are displayed, is crucial to understanding learning and instruction in informal educational activities. Nonetheless, the observation at the telescope reveals not just a one-way asymmetry of knowledge but two, making it very particular. On one side we have the guide with the knowledge about the institution, the telescope manipulation and the astronomy phenomena. On the other we have the visitor, who has the knowledge of what he is seeing, in other words, the "ownership of his experience" [52]. Being equipmentally mediated by a telescope, this interaction has the particularity of not allowing the guide to see what the visitor is seeing because it is physically impossible to have two persons looking through the same eyepiece at the same time. So there is a double asymmetry of knowledge and of points of view that are fundamental to shape the interaction at the telescope. The guide, in order to make this observation happen needs to gain access to what the boy is seeing, and the boy needs to provide enough information so that the instructional order can happen, so that he can be guided. The guide is doing what we call "seeing by proxy" and to do so he uses a set of methods: (i) the guide 'reads' the boy's body movements and long pauses between question and answer sequences. At line 25, for example, the guide adjusts the eyepiece, acknowledging that the boy was not seeing anything. He adjusts it until the child finally says "yes" (line 26), claiming that he is now seeing; (ii) the guide asks sets of questions. A closer look at the questions asked by the guide shows that these produce two different things: on one hand, they are an assessment of the boy's observation, which allows the guide to continue the instructional seeing. On the other hand, these questions are also produced to inform the guide's indirect seeing. They instruct the boy's actual seeing and inform the guide's "seeing by proxy". That is related to what Goodwin [40] calls "professional vision". The objective is to see the sunspots against the background (the Sun) and the spots as sunspots, as seen by a member of a particular professional community who can identify them. They are not spots on the lens, they are not clouds, but specific features of the Sun: phenomena on its surface with certain characteristics. 
The guide, as a member of the astronomy community, sees that when he looks through the eyepiece. The boy does not have access to the same expertise, and needs to be told what he is looking at.

Educational and instructional methods
Embedded in the production of this interaction is its instructional and educational nature. First there is the instruction on how to use a telescope and the teaching about its functioning. It starts right before the turn studied here, with the demonstration of how to align and point the telescope to the Sun with all its particular details, and continues throughout the interaction (e.g. lines 7, 9-11). We can say in a simple way that the main educational objective of this interaction was to show the Sun and its spots and describe their characteristics briefly. This involved teaching the boy to observe and not just to see; what Eriksson et al. [53] call having discernment -"coming to know what to focus on and how to appropriately interpret it for a given context" (p. 168). We can also say that the instructional objective was to lead the visitor to look through the telescope properly and see the Sun disc and the visible sunspots. To achieve this, the guide needed to work in cooperation with the visitor, guide his observation (instruct the seeing), and clarify or "see" what the visitor was actually seeing (seeing by proxy) by gaining indirect access to the visitor's point of view. All of these accomplishments are co-produced by the participants in the event. From an educational point of view, the way the guiding occurs, from its initiation until its close, seems to have reached its objectives and left a satisfied and smiling "client". Also, looking at the nature of the event, we cannot forget its continuation, its insertion into a bigger event -the observation of the whole group. While his talk is directed at the boy, the guide knows that the rest of the group is listening and learning. This can be seen in the absence of instructions given to the rest of the visitors waiting in line. This careful and detailed guiding does not happen again, since it works as an "explicative transaction" [41] for the other visitors who are waiting in line. Taking into consideration that broader audience, the guide could have used this opportunity to talk about the relative size of the sunspots (many sunspots are as big as the Earth), as is done in the activities studied elsewhere [9,54]. The guide, throughout the observations of the other visitors who come after the child's turn described here, talks about the sunspots' characteristics and origin, and in general about the activity of the Sun, but he does not mention the size of the spots. The moment when the child sees them for the first time, being also the moment the group "sees" them for the first time, would be a good opportunity to bring it up. As mentioned before, while providing explanations to the boy, the guide is also providing them to the other visitors who are listening to the interaction while waiting in line. Size and distance scales in astronomy are crucial ideas to be communicated and taught [54].
The boy at the telescope was too young to understand these -research suggests that concepts of relative size and distance begin to be grasped, at best, in primary school, but most likely only around 12-14 years of age [54]. Nevertheless, giving that information to the rest of the visitors while talking to the boy would provide the rest of the group with another dimension of the phenomenon when seeing the sunspots with the telescope -a feeling of the size of the spots, a feeling of the size of the Sun. Focusing on the educational and instructional methods used by the guide, we were able to identify a number of diverse features and devices present in the conversation. First of all, we highlight the pauses. They have an important role in shaping the structure of the interaction. These devices are mainly of four different types: i) the long pauses between the main parts of the interaction (lines 4, 17, 38, 52, 60). These pauses seem to mark the changing of what is being done in the interaction, signalling it and helping in the transition; ii) the waiting pauses. These are pauses used to wait for the viewing to occur or to check if it is occurring (e.g. lines 17, 22, 26). These seem fundamental to allow time for the observation to happen and to obtain feedback; iii) the pauses during the explanations. These pauses split the explanations into different sections, again organizing what is being communicated and highlighting its different parts (e.g. lines 11, 39-41, 50-51) or giving time for explanations to align with contemporaneous observations (e.g. lines 42, 52, 56); iv) the pauses after questions, allowing time to answer. The guide skilfully uses pauses with that purpose (e.g. lines 22, 29, 45, 46, 55). These different pauses are used with precision. As seen, a close analysis shows they are not randomly placed. They are devices used as part of a method to achieve something. We further note that some of the pauses have multiple purposes. Another instructional method used is the question-answer pair. This is a well-known method used in guided visits and present in guide-training literature [30]. Questions are ways to gain access to information, and the interaction builds and progresses around that update of information. Camhi [30] lists this as one of the methods observed in guided-visit interactions and highlights that "there are many categories of questions, each with its own underlying educational or communicative rationale" (p. 283). In this case we identified two types of questions: i) questions to gain access to the seeing (e.g. lines 19, 27, 28, 34, 36, 53). These can be open questions, or questions with candidate answers. The guide chooses the preferred question type on a moment-to-moment basis, depending on the feedback from the boy, as discussed earlier in this paper; ii) questions to gain access to previous knowledge (e.g. lines 44-46). As seen before, question-answer devices make the production of the event more interactive between guide and child. They also adapt the ensuing instruction to the specific recipient, being produced based on the knowledge of the boy and on what he is experiencing. The third instructional method identified is formulation. Formulations are used here to gain access to understanding (e.g. line 32) and to display understanding (e.g. line 31). That is important for the instructional and educational sequence of this event. Understanding needs to be achieved to move forward effectively.
Formulations are also used to teach, as a way of making sense of what is being seen, as in line 39 -"you just saw sunspots". A fourth instructional method is related to the two previous ones -the search for displays of understanding. The pursuit of displays of understanding (and of seeing) is crucial for the objectives of showing the Sun and the sunspots to the visitor. This -seeing by proxy -involves knowing the phenomenon that is sought, knowing the contingencies of the Earth's rotation and how it affects the observation, and knowing how the equipment operates, in detail. Analogies and non-scientific language [16], which can be seen as recipient-design mechanisms, are also methods used throughout the interaction captured here as data. Examples of this occur at line 9, "don't grab the small tu:be", referring to the eyepiece; at line 23, "are you seeing the yellow moon?", and at line 28, "is it a yellow ball?", referring to the Sun; and at line 34, "does it have eh spots?", referring to the sunspots. Another instructional method used in this interaction is the progressive focusing of the observation. The guide goes from showing the big picture first (the Sun disc) to ending with the details (the sunspots). Finally, there is the specific structure of the whole interaction. Step by step the guide shows, makes 'discoveries' with the boy, and then explains. He does so guiding the boy, while also allowing him space and time to learn and do it by himself, in the discovery of where to look, how to position the body to see, what to look for, how to describe it, how to look for details, contemplate, and learn what was discovered. This course of action leads the child into a self-discovery, making the experience more meaningful for the visitor.

Skills and display of expertise
In this paper we do not assume this guide's expertise based upon his occupational role. The guide exhibits expertise in the use of the telescope and astronomical observation throughout the visit. Prior to the guide's interaction with the first visitor, the group is already in the dome, forming a line and watching the guide prepare all the equipment for the observation of the Sun. That is the first exhibition of expertise. He then continues giving instructions to the boy, first to stand forward and step on the bench, second to look through the eyepiece without touching it with his hands. He carries on, further adjusting the eyepiece to the eye of the visitor. He does that even without a direct request from the boy or a complaint that he is not seeing well. He seems to understand it by simply evaluating the boy's reaction and the position of the eyepiece in relation to the boy's eye. Providing information about the instruments or the observation of the Sun and sunspots are other explicit situated accounts of expertise. For example, at lines 49-51, explaining what the sunspots are, or at line 59, stating that the Sun moves very fast. At line 53 there is another exchange showing knowledge being applied. Saying "and now they are getting out of place right?", the guide displays his knowledge of the functioning of this particular telescope, knowing that without a motor compensating for the movement of the Earth's rotation, at that point the image is starting to "run away". Furthermore, the guide mobilizes a set of skills, exhibited in and by his actions and interactions in this educational event.
Looking at the literature concerning the skills of educators in museums, Tran and King [20] propose a group of six components -"context, choice and motivation, objects, content, theories of learning, and talk" (p. 138). Also, Barros, Langhi and Marandino [19] highlight, as skills of the guides, the importance of generating conversation to understand the level of knowledge of the public, the use of questions in conducting the interaction, and the flexibility to adapt the topic to different publics. Our praxiological analysis reveals how such skills are constituted "in its circumstantial detail" [55], i.e. it concretizes formal-analytic notions that rely on unexplicated, common-sense practices, describing them in their specifics and accounting for them as in situ, in vivo work. Praxiological analysis "respecifies" components derived from formal-analytic instruments such as surveys and desk reviews, demonstrating what "professionalism" and "expertise" actually involve, and thus provides for a more sensitive discernment of the skills of astronomy education as its lived work. Some of the skills identified include being able to:
• "read" the body language of the visitors, for instance to understand the right position to look through the eyepiece. That implies the expertise of knowing how to observe and the functioning of the telescope;
• describe the characteristics of the image displayed or ask for descriptions of it. That implies knowing the characteristics of the objects observed;
• describe the functioning of the telescope and the actions required to prepare it for the observation;
• use "adequate language", or "recipient-design" [56] descriptions for the specific cohort of visitors, which will be different with each tour;
• wait, to give visitors time to observe, and to give them time to answer questions;
• guide the observation up to a point where it is possible to have a sense of discovery (this requires withholding some of the answers and, through guided QA sequences and instructions, carrying the visitor through the observation);
• give simple and appropriate explanations while the visitor is looking through the telescope.

Final remarks
As a participant in this educational and outreach event, this boy was fortunate to have been at the head of the queue to look through the eyepiece of the telescope. He was gently guided in a discovery of the same phenomenon -the sunspots -that Galileo saw at the beginning of the seventeenth century. This kind of informal educational setting, with a real telescope, allowing a real astronomical observation to get done, has the "ability to create memorable, meaningful, and highly contextualized experiences" which "facilitate learning" [14] (p. 177). However, as seen, producing a telescope observation of the Sun is a complex business. On the visitor's side it involved the right positioning and manipulation of the telescope, looking properly through the eyepiece, seeing the yellow disc, identifying some spots on the yellow disc, learning that the yellow disc is the Sun, and learning that the spots are actually on the Sun and are sunspots -all in less than two minutes, before the image moves out of view due to the rotation of the Earth. For many people the encounter with a telescope in informal educational settings is a first-time experience.
Looking through the eyepiece is intuitively available, yet how to look properly and see, how to position the eye, how to adjust, and what to expect to see are learned at the moment of the observation, with the help of the guide. As Meyer et al. [29] state, being a good guide is not just about mastering the content of the observation; it also requires "the skills to convey the content in an accessible and engaging manner" (p. 55). Studies suggest [20] that informal educators pay attention to the visitors' particularities and thus adapt their practice to the visitors' specific needs and interests. The single-case analysis we provide in this paper confirms these findings, but it does so in concrete detail. The guide is a professional astronomer and he is also an expert in communicating and in understanding a visitor's level of knowledge. Through seeing by proxy, the reflexive relation of observation and lines of questions, the guide ascertains what needs to be said in order to take the visitor through the observation, to help the visitor address the eyepiece correctly, to guide the visitor in seeing the phenomenon observable on that day, and to explain to the visitor what is being seen and its significance. We suggest that this seeing-by-proxy aspect of guided observations at the telescope is a central characteristic of informal astronomy education events, which must be taken into account when studying or preparing telescope observations. An ethnomethodological look at these activities reveals an array of practices that constitute them, practices that are mostly taken for granted and would go unnoticed [57] if we did not analyse a single turn at the telescope within an astronomy education event in its details. The detailed study of this turn at the telescope highlights its mechanisms, parts, cooperative work, methods, skills and the expertise mobilized to make astronomy education happen. In our view this is fundamental to understanding this activity and should be the starting point for studying it. In consonance with Zemel and Koschmann [58], studying real events, real observations and guide-public interactions "allows for the analytical inspection of how instructed experiences are accomplished" (p. 165). We suggest that the results presented and discussed here contribute to the study of astronomy educational activities at the telescope. The identification of the characteristics of the interaction at the telescope -the asymmetries of knowledge, the methods, and how it happens in practice -helps us get a better understanding of this enterprise. Moreover, both the skills and the accounts of expertise can be used as guidelines for evaluating the activities and the work of guides. Together with the structure and methods identified, these skills and accounts can also be useful in the design of training programmes for those guides and in the planning of activities, including supporting materials and instruments such as written instructions and audio guides. Future research should focus on how to better understand these informal educational activities -in particular the skills of the guides and the in situ practices of both guides and visitors while producing an astronomical observation. Our praxiological approach takes every event as unique, as "another first time" [57] (p. 9), but reveals the massive presence of ordinary, bespoke practices that the participants use to accomplish it, allowing us to learn from them and describe the area.
Network meta-analysis on the comparative efficacy of family interventions for psychotic disorders: a protocol

Introduction
Family interventions are effective and are strongly recommended for psychotic disorders. However, there is a variety of intervention types, and their differential efficacy is widely unclear. The aim of the planned network meta-analysis (NMA) is to compare the efficacy of family interventions that differ in content (eg, psychoeducation, mutual support, skills training) and format (eg, number of sessions, inclusion of patients, form of delivery).

Methods and analysis
We will include randomised controlled trials comparing psychosocial interventions directed at the adult relatives, friends or non-professional carers of people with a diagnosis of a psychotic disorder (schizophrenia spectrum) to any kind of control condition. The main outcomes will be global clinical state for the patients and coping with psychosis as well as attitudes towards psychosis for the relatives. Additional outcomes will be severity of symptoms, functioning, burden and compliance/drop-out. We conducted a comprehensive search of the Cochrane Central Register of Controlled Trials, MEDLINE(R), PsycINFO, Cumulative Index to Nursing & Allied Health Literature (8 August 2019) and the reference lists of review articles. Full-text assessment of eligibility, data extraction and risk-of-bias assessment will be done by two independent reviewers. An NMA will be conducted for any of the planned outcomes and intervention characteristics for which sufficient and appropriate data are available. The analyses will make use of a random effects model within a frequentist framework. Estimates for all pairwise treatment effects will be obtained using standardised mean differences for continuous outcomes and risk ratios for dichotomous outcomes. Interventions will be ranked according to their relative efficacy. We will address the assumption of transitivity, heterogeneity and inconsistency using theoretical and statistical approaches. The possibility of publication bias and the strength of evidence will also be examined.

Ethics and dissemination
There are no ethical concerns. Results will be published in peer-reviewed journals and presented at practitioners' conferences.

PROSPERO registration number CRD42020148728.

INTRODUCTION
Schizophrenia and other psychotic disorders are among the most severe mental disorders and cause immense suffering for millions of people and their families worldwide. The most widely available treatment is antipsychotic drug therapy. However, because of unsatisfactory response, problems with adherence and disabling side-effects, 1 2 the focus has begun to switch more towards psychosocial interventions. Family interventions are one of two psychosocial therapies strongly recommended in recent clinical practice guidelines. [3][4][5] The relatives of people with psychotic disorders play an important role in the course of the disorder. High levels of expressed emotions (EE), that is, criticism, hostility and emotional overinvolvement expressed by the family, 6 were shown to be a reliable predictor of relapse in numerous studies. 7 Additionally, relatives are often the primary caregivers of patients with psychosis and are thus generally considered 'important in the process of assessment and engagement in treatment and also in the successful delivery of effective interventions and therapies for people with psychotic disorders' (British National Institute for Health and Care Excellence guidelines, p28). 4
Strengths and limitations of this study
► Network meta-analysis will provide new information on the efficacy of different types of family interventions for psychotic disorders.
► It will enable comparisons and increase the precision of effect estimates.
► It will possibly result in a ranking of intervention types that can inform guidelines for clinical practice.
► Diversity of studies may result in heterogeneity or inconsistency, which will be addressed with theoretical and statistical approaches.
► Scarcity of studies for specific intervention types or for direct comparisons may reduce the connectivity of networks and limit the interpretation of results.

As a result, attempts were made to positively influence the course of psychotic disorders by improving their management within the patients' families. However, caring for a person with a psychotic disorder is often a heavy burden, and relatives face considerable emotional, social and economic challenges. [8][9][10] These stressful conditions are likely not only to affect the well-being of the relatives, but also to limit their long-term ability to support the patient. Therefore, reducing the burden of care and enhancing the well-being of carers has become an additional focus of family interventions. Extensive research has been done on family interventions' efficacy. Previous meta-analyses have found family interventions to substantially and consistently reduce relapse and rehospitalisation rates at follow-up assessments (risk ratios (RR) ≈ 0.5-0.8). 4 11-13 Symptom severity and social functioning also show slight improvements, although the evidence base is less solid (standardised mean differences (SMD) ≈ 0.3-0.4). 4 13 Regarding caregivers' outcomes, the most profound finding seems to be a significantly reduced number of high-EE families and a reduced burden of care. 13 14 One well-known limiting factor of the evidence base and the delivery of efficacious family interventions is their great variability regarding content, aims and format. 3 12 15 The interventions described by Pharoah et al 13 range from psychoeducation (eg, 17 ), through training of communication and problem-solving skills (eg, the Behavioural Family Therapy 18 ), to mutual support groups, 19 and from a few sessions to years of support. Despite the variety in the content of interventions, we still know little about whether some approaches are more efficacious than others. Separate meta-analyses have been conducted on specific types of interventions, such as psychoeducation, psychoeducation plus skills training and systemic therapy. [20][21][22][23] All of them found evidence for the specific interventions' efficacy, although with regard to different outcomes and follow-up periods. One meta-analysis differentiated between cognitive-behavioural, purely behavioural and 'pragmatic' family interventions and found no difference in regard to relapse. 24 For recent-onset psychosis, a meta-analysis found mutual support to be more effective than psychoeducation in improving family functioning one to two years after the intervention. 14 However, this analysis was only based on two studies that had directly compared these approaches. In a systematic review of randomised controlled trials (RCT) on outcomes for relatives of people with psychotic disorders, none of the content components or types assigned to the studies (eg, psychoeducation only, psychoeducation plus mutual support, psychoeducation plus skills training) reliably distinguished effective from ineffective interventions. 15
Similarly, we are still in the dark about the ideal format of a family intervention. Some meta-analyses have calculated effects for subgroups of studies with specific formats. These indicate that more extensive interventions are more successful in reducing relapse 4 12 and demonstrate that the positive evidence for family interventions is mainly based on interventions that include the patient. 4 Meta-analyses of studies that have directly compared interventions with a focus on single families versus multiple-family groups did not find them to differ in terms of relapse, but suggest that working with single families is better accepted than group settings involving multiple families. 4 13 The existing meta-analytic approaches have several limitations when it comes to evaluating the comparative efficacy of different family intervention types:
1. The differentiation of intervention types is unclear. In addition to unclear descriptions in primary studies, 15 many interventions comprise a variety of different components. The definitions used in existing meta-analyses and reviews are diverse and often imprecise.
2. There are only a few studies that directly compare different types of interventions, and these have rarely been meta-analysed. Most of the meta-analytic evidence is based on comparisons of a single intervention type to a non-intervention control condition.
3. There is a wide variety of outcome measures and time points. The specificity of previous meta-analyses and the diversity of findings make it difficult to compare the results and to draw reliable conclusions.
4. For most of the intervention types the number of studies is small. For example, many subgroup meta-analyses for specific formats are based on fewer than five studies. 4 13 This results in imprecise effect size estimates and limits the interpretation of significance.
5. The differential effects of some important characteristics of family interventions (eg, media-based vs face-to-face formats) have not yet been investigated meta-analytically.
Against this background, it is still widely unclear whether there are significant differences in efficacy between different types of family interventions. This study aims to compare the types of family interventions in a systematic and consistent manner, using the method of network meta-analysis (NMA). NMA extends classical meta-analysis because it allows multiple intervention types to be compared simultaneously and combines the evidence of direct comparisons with indirect comparisons via common control conditions. 25 26 This enables us to evaluate the comparative efficacy of intervention types that have not yet been directly compared. Due to the inclusion of additional information from indirect comparisons, the efficacy can also be estimated more precisely. 27 Finally, all intervention types can be ranked according to their relative efficacy. This information could be of high value for patients, their families as well as healthcare providers, and may serve as a guideline for delivering the most efficacious interventions.

Objectives
Using NMA, this study aims to compare the efficacy of different types of family interventions for people with psychotic disorders and their relatives. Interventions will be differentiated by content (eg, psychoeducation, mutual support, skills training) and format (eg, number of sessions, inclusion of patients, form of delivery).
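Before turning to the methods, the logic of combining direct and indirect evidence sketched above can be illustrated with a small numerical example. All effect sizes below are hypothetical and serve only to show how an indirect estimate of A versus B is derived from two direct comparisons against a common comparator C; the protocol's actual estimates will come from the full network model described under Data synthesis.

```r
# Illustrative only: hypothetical SMDs, not data from any included study.
# Direct comparisons of two family intervention types (A, B) against a
# common control condition (C); negative SMDs favour the intervention.
smd_ac <- -0.40; se_ac <- 0.12   # A vs C (direct)
smd_bc <- -0.25; se_bc <- 0.15   # B vs C (direct)

# Indirect estimate of A vs B via the common comparator C:
# the effects subtract, the sampling variances add
smd_ab_ind <- smd_ac - smd_bc
se_ab_ind  <- sqrt(se_ac^2 + se_bc^2)

round(c(estimate = smd_ab_ind,
        lower    = smd_ab_ind - 1.96 * se_ab_ind,
        upper    = smd_ab_ind + 1.96 * se_ab_ind), 2)
#> estimate    lower    upper
#>    -0.15    -0.53     0.23
```

The indirect estimate is necessarily less precise than either direct comparison, because the two sampling variances add. When direct evidence on A versus B also exists, an NMA pools it with such indirect evidence, which is what yields the gain in precision mentioned above.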
METHODS AND ANALYSIS
Methods for this NMA are based on the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P), 28 the PRISMA extension statement for reporting of systematic reviews incorporating NMA of healthcare interventions 29 and the chapter of the Cochrane Handbook on undertaking NMA. 25 The NMA has been registered in the PROSPERO database (CRD42020148728); the record will be updated with any changes made to the protocol.

Eligibility criteria
To include a sufficient number of studies for the specific direct and indirect comparisons and to increase the connectivity of the networks, some of the eligibility criteria are more lenient than in traditional meta-analysis. 30

Population
Eligible studies will have to deal with two populations:
1. People with a diagnosis of a psychotic disorder, defined as schizophrenia, schizoaffective disorder, schizophreniform disorder, brief psychotic disorder or delusional disorder. In order to meet the diversity of clinical practice, 31 all kinds of definitions and diagnostic procedures as well as all kinds of comorbid disorders will be accepted. Populations with subthreshold psychotic symptoms (eg, high-risk populations) will be excluded. There will be no restrictions regarding age, as family interventions are also common in younger people/adolescents (eg, first episode of psychosis). Samples also including people without psychotic but with other diagnoses are eligible if most of the patients are diagnosed with a psychotic disorder. 'Most' is defined as a minimum of 75% to ensure that the included studies allow drawing conclusions about the population of interest, and in accordance with a Cochrane meta-analysis on family interventions for schizophrenia. 13 If only broad categories of psychotic disorders in major classification systems, such as 'Schizophrenia and other psychotic disorders' in the Diagnostic and Statistical Manual of Mental Disorders IV 32 or 'Schizophrenia, schizotypal and delusional disorders' in the International Statistical Classification of Diseases and Related Health Problems: 10th revision, 33 are used to describe the diagnoses, we assume that at least 75% are psychotic disorders as defined above.
2. The relatives, spouses/partners, friends or non-professional carers of the people with psychotic disorders, as defined by the study. They will have to be of an adult age; minor children of people with psychotic disorders will be excluded.

Intervention
Family interventions, defined as any psychosocial intervention directed at the relatives, spouses/partners, friends or non-professional carers of people with a psychotic disorder, will be included. If the intervention also comprises treatment elements for patients only, a substantial part of the intervention as a whole (ie, at least 50%, excluding treatment as usual or comparator elements) has to be directed at a relative, spouse/partner, friend or non-professional carer. The exact types and definitions of interventions to be compared in the analyses will depend on the distinct interventions and comparisons realised in the included studies and may be adjusted, amended or lumped. The following categories were created in consideration of the classifications resulting from previous review articles 3 12 15 34 and after pilot screening of about 25 primary study articles. These articles were identified as key publications in reviews and on the basis of the current state of research.
AL read the full texts of the publications; the study designs were then discussed in the research team. Content of the intervention: if different types of family interventions are compared in a study, they will have to differ in at least one of these categories, because otherwise their comparison could not be included in the network.

Comparator
Any kind of comparator will be included. The following classification of the comparator's content is also preliminary and may be adjusted through the process of study selection. So far, it includes no treatment versus treatment as usual (as defined by the study) versus psychosocial intervention for patients only (eg, psychoeducation, cognitive-behavioural therapy, social skills training).

Study type
RCTs will be included, defined according to the Cochrane Handbook as trials in which 'the author(s) state explicitly (usually by some variant of the term 'random' to describe the allocation procedure used) that the groups compared in the trial were established by random allocation'. 35 Quasi-RCTs, cluster RCTs and cross-over trials will be excluded.

Outcomes
The outcomes were selected and defined in consideration of previous reviews, pilot screening of study articles and statistical properties. However, here too, the definitions may be refined during study selection and data extraction based on the specific measures used in the included studies. The most commonly reported measures of family interventions' efficacy will serve as main outcomes.

Main outcomes
For people with psychotic disorders:
► Global clinical state, including the occurrence of relapse, hospitalisation, crisis care service use or remission/recovery, as defined by the study.
For relatives/carers:
► Coping and attitudes, including endpoint scores in rating scales or questionnaires concerning EE, communicative or problem-solving skills, coping with stress and negative affect, understanding of the patient's feelings, knowledge of psychosis or positive beliefs and attitudes towards the disorder.

Additional outcomes
For people with psychotic disorders:
► Severity of symptoms, including endpoint scores in rating scales or questionnaires concerning psychotic symptoms or overall mental health.
► Functioning, including any quantification of global, social, occupational functioning or living skills.
For relatives/carers:
► Burden, including endpoint scores in rating scales or questionnaires concerning subjective or objective burden of care, emotional response (eg, stress, anxiety, depression), mental health, well-being or quality of life.
► Compliance/drop-out, including number of relatives/carers leaving the study early for any reason since randomisation.

Further selection criteria
The measures within the outcome categories were sorted according to how well they represent the category's heading, that is, the concept of interest, and how common they are. Only established scales and subscales of questionnaires and rating scales, for which reliability and validity have been examined, will be included. Broad questionnaires or rating scales that cover further domains than those of interest will be included if the main focus of the questionnaire/scale (defined by at least 75% of the questions or subscales) pertains to the domain of interest. To be eligible, studies will have to report numerical outcome data for the calculation of SMDs for continuous outcomes and RRs for dichotomous outcomes.
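As a concrete illustration of this data requirement, the sketch below computes both planned effect size types with the metafor package. The endpoint means, SDs and event counts are invented for the example and are not taken from any trial; metafor's "SMD" measure corresponds to the bias-corrected SMD (Hedges' g).

```r
# A minimal sketch of the planned effect size computation; all numbers
# are hypothetical.
library(metafor)

# Continuous outcome (e.g. endpoint symptom severity): SMD from means,
# SDs and group sizes
es_cont <- escalc(measure = "SMD",
                  m1i = 22.1, sd1i = 6.3, n1i = 40,  # family intervention arm
                  m2i = 25.4, sd2i = 6.8, n2i = 38)  # control arm
summary(es_cont)  # yi = effect size, vi = sampling variance, plus 95% CI

# Dichotomous outcome (e.g. relapse): risk ratio from event counts
es_bin <- escalc(measure = "RR",
                 ai = 8,  n1i = 40,   # events/total, intervention
                 ci = 15, n2i = 38)   # events/total, control
summary(es_bin)   # yi is a log risk ratio; exp(yi) gives the RR
```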
Change scores will be excluded because the combined analysis with endpoint scores is not recommended when using SMDs. 36 If a study reports multiple eligible outcome measures with sufficient information for effect size calculation, the measure for inclusion will be selected based on the following criteria, which were developed to maximise validity and homogeneity of study outcomes:
1. For multiple measures within the same outcome category, measures will be preferred according to the order they are listed above (eg, for global clinical state, our first preference will be measures of relapse; if these are not reported, we will include measures of hospitalisation, then measures of crisis care service use and then measures indicating the extent of remission or recovery).
2. For multiple time points, outcomes at one year (after the beginning of intervention/baseline assessment) or those closest to this time point will be preferred.
3. For multiple methods of assessment (eg, different scales), the most commonly reported will be determined among the included studies and this will be preferred for inclusion. In cases in which more than one outcome remains, the one with the best psychometric properties will be preferred.
4. For multiple samples, the one with the most comprehensive and adequate outcomes or -if outcomes are equivalent -the largest N will be preferred.
The criteria may be adjusted if this is required based on the included studies. If no single outcome measure can be determined based on these criteria, a composite effect will be calculated (eg, for separate scales of positive and negative psychotic symptoms). The procedure of calculating a composite effect will follow the proposals made by Borenstein et al. 37

Report characteristics
Reports will have to include an English abstract to evaluate eligibility in a first step. Conference/congress abstracts and trial registrations without results will be excluded, as these reports do not present sufficient data for the assessment of eligibility, especially regarding outcomes. However, we will try to identify more comprehensive publications for relevant trials (see the Searches section). Foreign language articles will be translated. If there are multiple reports referring to the same study, of which at least one presents sufficient information for our analyses in English, the reports in other languages will not be considered due to limited resources.

Searches
The bibliographic databases Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE(R), Ovid PsycINFO and Cumulative Index to Nursing & Allied Health Literature (CINAHL) were searched on 8 August 2019. There was no limit for year of publication, except that Ovid MEDLINE(R) (and Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Daily and Versions(R)) was only searched for studies published since 2005. This is recommended in the Cochrane Handbook 35 to supplement but not duplicate the search for RCTs by CENTRAL. The search strategies were created using database-specific subject headings and a variety of free-text terms reflecting the key concepts of interest. To limit results to RCTs, we developed search terms according to the 'Cochrane Highly Sensitive Search Strategy for identifying randomized trials in MEDLINE: sensitivity- and precision-maximizing version (2008 revision)', 35 supplemented with various free-text terms from CENTRAL's most recent search strategy for the identification of RCTs in Embase (https://www.cochranelibrary.com/central/central-creation).
The search strategies are included in the online supplemental material 1. Additionally, the reference lists of the most relevant, recent and comprehensive review articles were screened to identify articles missed by the computerised search. Reviews were identified by the above searches and by searching the Cochrane Database of Systematic Reviews. We will also try to identify full reports of relevant trial registrations, study protocols and congress abstracts, as well as cross-references in primary articles.

Study selection
Bibliographic data of all articles retrieved by the search will be imported into the reference management software ZOTERO. They will be deduplicated using the deduplication tool of the Systematic Review Accelerator by the Centre for Research in Evidence-Based Practice (https://sr-accelerator.com/#/), which was found to have good sensitivity and specificity, 38 and additional manual screening. Titles and abstracts of the remaining articles and of the additional articles identified from references will be screened by AL to identify studies that potentially meet the inclusion criteria. Of those, the full texts will be consulted. We will try to identify overlapping samples and multiple reports of the same study by comparing, for example, author names, study sites and years, sample sizes, demographic and clinical data, treatment descriptions and results. The eligibility of each study will be assessed independently by two review team members and documented in an adapted version of the Cochrane data collection form (intervention reviews -RCTs only). Discrepancies will be identified and a consensus will be formed by discussion, including a third review team member where necessary. In the case of major unclarities, we will attempt to obtain additional information from previous reviews that included the study or from the study authors if an accurate email address is available.

Data extraction
A data collection form and tool will be created and pilot tested in consideration of the recommendations in chapters 5.3 and 5.4 of the Cochrane Handbook. 39 Two review team members will extract data independently; discrepancies will be resolved through discussion, including the third review team member where necessary. In the case of missing data or unclear study information, we will again consult previous review articles or -if an accurate email address is available -ask the study authors to provide the relevant information on intervention characteristics and outcome data. Extracted information will include: 1. Characteristics of the study: year, country.

Risk-of-bias assessment
Risk of bias in individual studies will be assessed using Version 2 of the Cochrane risk-of-bias tool for randomised trials (RoB 2). 40 This recently developed tool uses signalling questions and an algorithm to help make judgements of 'low risk', 'some concerns' or 'high risk' related to five domains:
1. Bias arising from the randomisation process.
2. Bias due to deviations from intended interventions.
3. Bias due to missing outcome data.
4. Bias in measurement of the outcome.
5. Bias in selection of the reported result.
Additionally, an overall risk of bias will be determined by the tool's algorithm. The assessment will be done for each of the trial's outcomes and with respect to the assignment to the intervention (intention-to-treat effect). Two trained review team members will independently answer the signalling questions using the available Excel form.
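For orientation, the sketch below implements a simplified approximation of how the five domain-level judgements map onto an overall rating. It deliberately omits the tool's provision for raising several 'some concerns' judgements to an overall 'high risk', so it does not reproduce the official RoB 2 algorithm; in the review itself the judgements will come from the tool's own Excel form. The function name and example data are ours.

```r
# Simplified approximation of the RoB 2 overall judgement (not the
# official algorithm): "high" if any domain is high; otherwise "some
# concerns" if any domain raises some concerns; otherwise "low".
rob_overall <- function(domains) {
  stopifnot(all(domains %in% c("low", "some concerns", "high")))
  if (any(domains == "high")) return("high")
  if (any(domains == "some concerns")) return("some concerns")
  "low"
}

# Hypothetical domain judgements for one outcome of one trial
rob_overall(c(randomisation    = "low",
              deviations       = "some concerns",
              missing_data     = "low",
              measurement      = "low",
              selective_report = "low"))
#> [1] "some concerns"
```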
Disagreements will be identified with the discrepancy check function and will be resolved by discussion, involving the third review team member where necessary. As high risks of bias may result in an overestimation of family interventions' efficacy, studies with an overall high risk will be excluded in sensitivity analyses.

Data synthesis
An NMA will be conducted for each of the planned outcomes and intervention characteristics for which sufficient and appropriate data are available. The R package netmeta will be used. Analyses will apply a random-effects model within a frequentist framework.41 We will assume a constant heterogeneity across the comparisons within a network. A network plot will be used to display the intervention types (as nodes) and the quantity of studies for all possible treatment comparisons (as connecting lines). Estimates for all pairwise treatment effects will be obtained using SMDs for continuous outcomes and RRs for dichotomous outcomes, both with their 95% confidence intervals. The effects will be displayed in a league table. Interventions will be ranked using P-scores, a frequentist analogue to the Surface Under the Cumulative Ranking curve (SUCRA).42 Some interventions may have several distinct content components. If there are appropriate data, the influence of the individual components will be evaluated in an additive NMA, assuming that the effect of an intervention with several components is the sum of the effects of its individual components.43
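As a rough illustration of this pipeline, the following R sketch shows how netmeta fits a frequentist random-effects NMA, draws the network plot, produces the league table and P-score ranking, and runs the planned local and global consistency checks. The data frame, treatment labels and effect sizes are hypothetical placeholders, and some argument names (eg, common/random) may differ slightly across netmeta versions.

# Minimal sketch of the planned NMA using netmeta; all data are invented.
library(netmeta)

d <- data.frame(
  studlab = c("Trial A", "Trial B", "Trial C"),
  treat1  = c("Psychoeducation", "Systemic therapy", "Psychoeducation"),
  treat2  = c("TAU", "TAU", "Systemic therapy"),
  TE      = c(-0.35, -0.20, -0.10),  # SMDs for a continuous outcome
  seTE    = c(0.12, 0.15, 0.18)
)

nm <- netmeta(TE, seTE, treat1, treat2, studlab, data = d,
              sm = "SMD", common = FALSE, random = TRUE,
              reference.group = "TAU")

netgraph(nm)                             # network plot: nodes and comparisons
netleague(nm)                            # league table of all pairwise effects
netrank(nm, small.values = "desirable")  # P-score ranking (SUCRA analogue)
netsplit(nm)                             # direct vs indirect evidence (local inconsistency)
decomp.design(nm)                        # Q decomposition within and between designs
# funnel(nm, order = c("TAU", "Psychoeducation", "Systemic therapy"))
#   would produce the 'comparison-adjusted' funnel plot; netcomb(nm) would
#   fit the additive component NMA if treatments were coded as component
#   combinations (eg, "psychoeducation+skills").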
Diversity and transitivity
Diversity of the included populations, interventions and outcomes may lead to statistical heterogeneity. This may also threaten the assumption of transitivity, which implies that the effect of treatment A versus B can be indirectly determined via a common treatment C. For this assumption to hold, trials that investigate different types of interventions need to be comparable with regard to clinical or methodological variables that may influence the treatment effect. However, the intervention types we plan to investigate may not be independent of each other. For example, psychoeducative interventions may have fewer sessions than other content types, which would threaten the transitivity of the NMA for content and number of sessions. Other potential effect modifiers are participant or outcome characteristics. For example, the type of family intervention might differ depending on the severity of the relatives' dysfunction/distress, which was found to be associated with the efficacy of interventions,15 or interventions with more sessions may report on outcomes with longer follow-up periods. To evaluate the likelihood of transitivity, the distribution of potential effect modifiers will be compared between trials of different intervention types and investigated for similarity.25 44 45 Based on Cochrane recommendations,36 clinical hypotheses and empirical evidence,4 […]

Assessment of heterogeneity and inconsistency
Transitivity will also be addressed by assessing the consistency of direct and indirect evidence,30 with both local and global methods.25 44 Locally, the inconsistency of a specific treatment comparison will be evaluated by splitting the network estimate into the contributions of direct and indirect evidence and checking for agreement. Global heterogeneity of the network will be assessed using generalised Q- and I²-statistics.41 A decomposition into Q-statistics for heterogeneity within designs (ie, studies with the same treatment comparisons) and between designs will help to identify sources of heterogeneity and to evaluate the inconsistency of the network as a whole.

Metabias and strength of evidence assessment
We will examine the possibility of publication bias by non-statistical considerations46 and by calculation of 'comparison-adjusted' funnel plots to assess funnel plot asymmetry.47 The strength of evidence for the estimates of the main outcomes will be evaluated according to the proposals made by Salanti et al,46 who adapted the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework specifically for NMA. This extensive approach may be adapted to the data and resources available.

Additional analyses
In sensitivity analyses, studies with an overall high risk of bias will be excluded. If there is evidence that the assumption of transitivity is threatened within a network (see the Diversity and transitivity section), that is, if there are significant differences between the distributions of potential effect modifiers, we will try to increase consistency and homogeneity by performing subgroup NMAs for the distinct manifestations of the potential effect modifiers. For example, regarding the number of sessions, there may be separate analyses for studies with a focus on psychoeducation and studies with other foci.

Patient and public involvement
Patients or the public will not be involved in the design or conduct of the study. However, the Empower Peers to Research (EmPeeRie) Now group at the UKE (Outpatient Clinic at the University of Hamburg) will consult on issues concerning reporting, interpretation and dissemination of findings. The EmPeeRie Now group consists of members with lived experience of mental disorders, with multiple members having lived experience of psychosis.

ETHICS AND DISSEMINATION
There are no apparent ethical issues. Results will be published in peer-reviewed journals and presented at practitioners' congresses. Full data will be made available on request.

Contributors: Both authors designed this study in close cooperation. AL drafted the protocol and final manuscript. TML revised and approved it.
Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests: None declared.
Patient consent for publication: Not required.
Provenance and peer review: Not commissioned; externally peer reviewed.
Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any errors and/or omissions arising from translation and adaptation or otherwise.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Does grasping capacity influence object size estimates? It depends on the context

Linkenauger, Witt, and Proffitt (Journal of Experimental Psychology: Human Perception and Performance, 37(5), 1432–1441, 2011, Experiment 2) reported that right-handers estimated objects as smaller if they intended to grasp them in their right rather than their left hand. Based on the action-specific account, they argued that this scaling effect occurred because participants believed their right hand could grasp larger objects. However, Collier and Lawson (Journal of Experimental Psychology: Human Perception and Performance, 43(4), 749–769, 2017) failed to replicate this effect. Here, we investigated whether this discrepancy in results arose from demand characteristics. We investigated two forms of demand characteristics: altering responses following conscious hypothesis guessing (Experiments 1 and 2), and subtle influences of the experimental context (Experiment 3). We found no scaling effects when participants were given instructions which implied the expected outcome of the experiment (Experiment 1), but they were obtained when we used unrealistically explicit instructions which gave the exact prediction made by the action-specific account (Experiment 2). Scaling effects were also found using a context in which grasping capacity could seem relevant for size estimation, by asking participants about the perceived graspability of an object immediately before asking about its size on every trial, as was done in Experiment 2 of Linkenauger et al. (2011) (Experiment 3). These results suggest that demand characteristics due to context effects could explain the scaling effects reported in Experiment 2 of Linkenauger et al. (2011), rather than either hypothesis guessing, or, as proposed by the action-specific account, a change in the perceived size of objects.

Electronic supplementary material: The online version of this article (doi:10.3758/s13414-017-1344-3) contains supplementary material, which is available to authorized users.

The term action capacity refers to our ability to successfully perform actions. It is restricted by the morphology and capabilities of our bodies (Adolph & Berger, 2006; Proffitt & Linkenauger, 2013). Given the tight coupling between perception and action (Clark, 1999; Gibson, 1979; Warren, 1984), it has been suggested that action capacity can directly influence visual perception (Proffitt & Linkenauger, 2013; Witt, 2011, 2016). Specifically, the action-specific account of perception suggests that our perception of the spatial properties of the environment scales according to our action capacity (Proffitt, 2006; Proffitt & Linkenauger, 2013). For example, reaching with a tool that increases maximum reach can influence the estimated distance to a target (Witt, Proffitt & Epstein, 2005). Witt et al. (2005) found that targets which were out of reach without the tool were estimated as closer after reaching to them with the tool. Action-specific scaling effects suggest that perception may be cognitively penetrable, so that perception can be directly influenced by higher-level cognition. If action-specific scaling effects truly reflect changes in what is perceived in this strong sense, then this has major implications for standard, modular theories of vision, which hold that perception is encapsulated and separate from cognition (Pylyshyn, 1999; for a recent review, see Firestone & Scholl, 2015).
However, a major debate concerning the action-specific account is whether the observed scaling effects reflect judgement rather than perception (Collier & Lawson, 2017; Durgin et al., 2009; Durgin, Klein, Spiegel, Strauss & Williams, 2012; Firestone & Scholl, 2014; Zelaznik & Forney, 2016; for reviews, see Firestone, 2013; Firestone & Scholl, 2015; Philbeck & Witt, 2015; Proffitt & Linkenauger, 2013; Witt, 2011, 2016). Specifically, participants' responses may not reflect differences in what they actually perceive; rather, their spatial estimates may be affected by nonperceptual influences such as their beliefs about the purpose of the experiment. This possibility has been demonstrated experimentally. In a famous study supporting the action-specific account, hills were reported as steeper when observers wore a heavy backpack (Bhalla & Proffitt, 1999). However, Durgin et al. (2009) found that if participants were told that the backpack they wore contained equipment for monitoring their ankle muscles, their estimates of hill slant did not differ from those of participants who did not wear the backpack. This finding suggests that participants who were not given a reason for wearing the backpack may have deduced that the backpack was supposed to influence their estimates of hill slant and adjusted their responses accordingly. Similarly, Firestone and Scholl (2014) tested whether the finding that apertures were estimated as narrower when participants held a horizontal rod that was wider than their body (Stefanucci & Geuss, 2009) reflected a true perceptual change or demand characteristics. Firestone and Scholl (2014) found that when participants were given a convincing reason for holding the rod, their estimates of aperture width did not differ from those of participants who did not hold the rod. These results suggest that if participants are not given an explanation for a salient manipulation, they may attempt to figure out the experimental hypothesis, and that this, in turn, can influence their responses. Together, the results of Durgin et al. (2009) and Firestone and Scholl (2014; see also Woods, Philbeck & Danoff, 2009) suggest that demand characteristics could explain a number of action-specific scaling effects. Demand characteristics broadly refer to factors in an experimental setting which affect participants' responses (Orne, 1962). We will term the form of demand characteristics investigated by these authors hypothesis guessing: participants try to work out the expected results of the experiment and consciously adjust their responses accordingly. Such demand characteristics cannot, though, easily explain all action-specific effects (for some recent reviews, see Philbeck & Witt, 2015; Witt, 2016). For example, Taylor-Covill and Eves (2016) found that overweight individuals estimated staircases as steeper than did healthy-weight individuals. These results are difficult to explain in terms of hypothesis guessing (see also Witt & Sugovic, 2013). Although participants probably knew their own weight, they were unlikely to intuit that this was expected to influence what they perceived spatially, particularly given that Taylor-Covill and Eves (2016) recorded the participants' weight only after they had made their estimates of slant.
Another form of demand characteristics could, though, influence performance without participants necessarily realising it, namely context effects due to the experimental setting or procedure. For example, performing two tasks in quick succession could create a context which implies that the two tasks are related in some meaningful way. In an example from the action-specific literature, Linkenauger, Witt, and Proffitt (2011, Experiment 2) reported that objects to-be-grasped in the right hand were estimated as smaller than objects to-be-grasped in the left hand. They claimed that this occurred because right-handers perceive their right hand as larger than their left hand, and so objects appear more graspable, and therefore smaller, when they intend to grasp them with their right hand. However, participants in Experiment 2 of Linkenauger et al. (2011) estimated both the graspability and size of objects on every trial. Asking participants about an object's graspability immediately before asking about its size may have created a context in which the two measures appeared related or became confused with each other. This could occur because the dimensions graspable-to-ungraspable and small-to-big are conceptually linked. This could lead participants to estimate easily graspable objects as smaller, even if the visual representation of the object is unchanged. This possibility is supported by evidence from the literature on cross-sensory correspondences, whereby properties of one perceptual domain are linked to properties in another (e.g. Walker, 2012). For example, heavy objects are rated as darker than light objects (Walker, Scallon & Francis, 2016). 'Graspability' is not a perceptual feature like those studied in the cross-sensory correspondence literature. Nevertheless, a similar issue could have arisen in Experiment 2 of Linkenauger et al. (2011) if the experimental context implied a conceptual relationship between grasping capacity and object size. If so, then the results of Linkenauger et al. (2011, Experiment 2) could be explained by demand characteristics associated with performing two conceptually linked tasks on the same trial, as opposed to reflecting a change in what participants perceived in the strongest sense. Only the latter interpretation is consistent with the action-specific account. We recently failed to replicate Experiment 2 of Linkenauger et al. (2011). In addition to testing for an effect of hand dominance, as was done in the original study, we directly manipulated grasping capacity by taping together the fingers of one hand (Collier & Lawson, 2017). This powerful manipulation restricted both actual (by ~1.2 cm) and perceived (by ~3.2 cm) grasping capacity. According to the action-specific account, taping should have influenced estimates of object size. However, although participants appropriately estimated the grasping capacity of their taped hand as less than that of their untaped hand, objects grasped in the taped hand were not estimated as larger than objects grasped in the untaped hand. We did not resolve why we failed to replicate Experiment 2 of Linkenauger et al. (2011), but we suggested that this could have been due to reduced context effects in our studies. This was achieved in two ways. First, in our initial experiments, participants completed the size estimation task before starting the grasping capacity task, so their size estimates were unlikely to be biased by considering the graspability of the objects.
Second, the design of our final experiment was similar to that of Linkenauger et al. (2011, Experiment 2) in that participants were explicitly told that we were interested in their grasping behaviour, and the grasping task immediately preceded the size estimation task on each trial. However, our instructions emphasised that the grasping task and the size estimation task were part of two unrelated experiments. In the present studies, we investigated whether we (Collier & Lawson, 2017) previously failed to replicate Experiment 2 of Linkenauger et al. (2011) because we reduced demand characteristics. In the present studies, participants had the fingers of one of their hands taped together, and we compared their estimates of object size for objects they had grasped in their taped versus their untaped hand. This taping manipulation has a number of advantages over the methods used by Linkenauger et al. (2011, Experiment 2). In their second experiment, Linkenauger et al. (2011) took advantage of the finding that right-handers perceive the grasping capacity of their right hand as greater than that of their left hand (Collier & Lawson, 2017; Linkenauger et al., 2011; Linkenauger, Witt, Stefanucci, Bakdash, & Proffitt, 2009). However, this only produces quite a small difference in perceived grasping capacity. Furthermore, there is no evidence for a difference in the actual grasping capacity of the right and left hands (Collier & Lawson, 2017; Linkenauger et al., 2011). In contrast, our taping manipulation alters both perceived and actual maximum grasp. In their final experiment, Linkenauger et al. (2011) manipulated perceived grasping capacity by magnifying the hand. However, as Linkenauger et al. (2011) themselves discuss (see also Witt, 2016), magnification could have induced a size-contrast illusion whereby objects may appear smaller next to a visually larger hand. It is therefore unclear whether the scaling effect they found in this experiment occurred because object size was scaled according to grasping capacity, or if it resulted from a size-contrast effect. In contrast, taping the hand directly reduces grasping capacity (Collier & Lawson, 2017) while minimising the possibility of inducing a size-contrast illusion (for a discussion, see Collier & Lawson, manuscript in preparation). The action-specific account predicts that a change in grasping capacity due to taping the hand should influence perceived object size. Specifically, blocks grasped in the taped hand should be estimated as larger than blocks grasped in the untaped hand, because the taped hand has a reduced grasping capacity. In Experiments 1 and 2, we tested whether previously reported effects of graspability on size estimates could instead be explained by hypothesis guessing, by investigating whether participants were sensitive to demand characteristics arising from leading instructions. In Experiment 3, we examined the influence of demand characteristics due to context effects by having participants judge both how difficult a block was to grasp and its size on every trial. We expected that this would create a context which made grasping capacity seem relevant for estimating object size.

Experiment 1
Experiment 1 was designed to test whether participants would figure out the predicted influence of taping on estimated object size from the instructions they were given and then change their estimates accordingly. We reasoned that, depending on their instructions, hypothesis guessing could lead to two opposite effects (see Fig. 1).
First, participants could be led to believe that objects grasped in their taped hand should look larger, because taping reduces both the perceived and the actual maximum size of objects that can be grasped (Collier & Lawson, 2017). Here, hypothesis guessing would produce an effect in the direction predicted by the action-specific account. Alternatively, participants could be led to believe that objects seen near to their taped hand should look smaller, because taping the hand makes it look smaller by reducing the maximum spread of the fingers (see Fig. 2), and because the taped hand could be used to anchor size estimates.

[Fig. 1. The predicted effects of instructions on perceived object size in Experiment 1. Left: Perceived object size decreases with a decrease in hand size due to taping (body-size account). Right: Perceived object size increases with a decrease in perceived grasping capacity (action-specific account).]

In the latter case, using leading instructions which imply the opposite effect to that predicted by the action-specific account provides a strong way to test whether the effect reported in Experiment 2 by Linkenauger et al. (2011) was the result of hypothesis guessing. If participants are sensitive to leading instructions in this task, then they would be expected to comply with their instructions regardless of the outcome they imply. We therefore tested both alternatives. In the action capacity group, the instructions implied that objects grasped by the taped hand should appear larger because the grasping capacity of the taped hand is reduced, consistent with the action-specific account. In the body-size group, the instructions implied that objects near to the taped hand should appear smaller because that hand appears smaller, and this could cause the object to be scaled down in size. In the third, objective-size group, the instructions did not suggest that taping would influence size estimation, and participants were explicitly told to ignore nonvisual factors when estimating object size. Here, taping was not expected to influence object size estimates due to hypothesis guessing. In Experiment 1, participants actually grasped each object whose size they estimated. In contrast, on each trial of Linkenauger et al. (2011, Experiment 2), participants only stated whether they thought they could grasp the object. They did not grasp the blocks until the end of the experiment. Here, we tested actual grasping because we believe that the task used by Linkenauger et al. has low ecological validity. In everyday life, we often perform simple actions without explicitly attending to them (Goodale & Haffenden, 1998), whereas we rarely repeatedly decide whether we could act without actually acting. Also, action-specific scaling effects have been reported even when, as in our experiments, participants performed a relevant action without being explicitly asked if they could do it (e.g., Witt & Dorsch, 2009). Finally, Franchak and Adolph (2014) showed that participants only updated their perceived action capacity following a change to their body after they had actually performed the relevant action. This suggests that, for our taping manipulation to be effective, participants needed to try to grasp the objects with their taped hand. In Experiment 1, we tested whether participants were sensitive to leading instructions which implied the desired experimental outcome.
On each trial, participants first grasped and moved a block with either their taped hand or untaped hand, then placed that block next to a laptop. They then used the same hand to adjust the horizontal gap between two lines on the laptop screen to match the perceived width of the block they had just moved. If hypothesis guessing influences performance, then we predicted that, relative to objects moved by the untaped hand, objects moved by the taped hand should be estimated as larger in the action capacity group, smaller in the body-size group, and the same size in the objective-size group (see Fig. 1).

Method
Ethical approval was granted for all of the experiments presented in this study by the relevant local ethics committee at the University of Liverpool.

Participants
Fifty-four participants (mean age = 18.7 years, seven males, n = 18 per group) were recruited for this study. Participants all self-reported as right-handed, and either volunteered or were rewarded with course credit for their time.

Design
Participants were allocated to one of three instruction groups (action capacity/objective size/body size). Throughout the experiment, participants had the fingers of one of their hands taped together. Half of the participants in each instruction group had their left hand taped (LHTaped group) and the remaining half had their right hand taped (RHTaped group). The middle and ring fingers were first taped together above the proximal interphalangeal (middle) finger joint; then all four fingers were taped together just underneath the same joint (see Fig. 2).

Apparatus, stimuli, and procedure
All participants received the following general verbal instructions: "In this experiment we will ask you to estimate the size of square stimuli. There are many possible interpretations of this instruction, so we want to make it clear what it is we want you to estimate. Imagine standing at one end of a road and looking at a house at the other end: the house may appear closer or farther away than it really is, depending on a variety of factors. For example, if you are very tired, hungry or in a rush, the distance to the house may appear greater than it really is. In contrast, if you are feeling very energetic, the distance to the house may appear shorter than it really is. These nonvisual factors have been previously suggested to influence spatial perception. The same logic applies to objects we can act on in our nearby environment. For example, if you are looking at a mug on a table, there may be things in the environment that make it visually appear closer to or further away than its actual physical distance from you (that is, the distance measured by a tape measure)."

Following this, they received group-specific instructions. The sentences highlighted in bold differed across the groups:

Action capacity group
"Similarly, this logic can be applied to the size of objects that we act on. For example, being able to grasp bigger objects may affect our perception of the size of objects we intend to grasp. In this experiment, we will tape together the fingers of one of your hands. This is to restrict the grasping capacity of one of your hands. You will then be presented with a series of square stimuli and asked to visually match their width on a screen. You will be asked to put either your left or right hand through the curtain to pick up the stimulus, take it out from behind the curtain, and place it on the table in front of you.
Use the same hand you picked up the stimulus with to use the arrow keys to move the lines on the screen apart and visually match the width of the stimulus on the screen. Base your answer on what size you feel the object is, taking all relevant nonvisual factors into account, including whether having your fingers taped together makes it harder for you to grasp big objects."

Body-size group
"Similarly, this logic can be applied to the size of objects that we act on. For example, thinking that our hand has decreased in size may affect our perception of the size of objects which we see near or hold in our hand. In this experiment, we will tape together the fingers of one of your hands. This is to simulate a shrinkage in the size of that hand. You will then be presented with a series of square stimuli and asked to visually match their width on a screen. You will be asked to put either your right or left hand through the curtain to pick up the stimulus, take it out from behind the curtain and place it on the table in front of you. Use the same hand you picked up the stimulus with to use the arrow keys to move the lines on the screen apart and visually match the width of the stimulus on the screen. Base your answer on what size you feel the object is, taking all relevant nonvisual factors into account, including whether having your fingers taped together makes your hand feel smaller."

Objective-size group
"Similarly, this logic can be applied to the size of objects that we act on. However, if during this task you think that the objects appear to be different in size than how big you think they really are (for whatever reason), ignore these things and base your estimation only on how big you think the object really is. In this experiment, we will tape together the fingers of one of your hands. You will then be presented with a series of square stimuli and asked to visually match their width on a screen. You will be asked to put either your left or right hand through the curtain to pick up the stimulus, take it out from behind the curtain, and place it on the table in front of you. Use the same hand you picked up the stimulus with to use the arrow keys to move the lines on the screen apart and visually match the width of the stimulus on the screen. Base your answer only on how big you think the object really is: imagine there's a tape measure stretched across the object and you're reading off its size."

After being given their instructions, participants completed a visual size-matching task. The stimuli were 10 foamboard blocks (0.5 cm thick). The blocks were square with sides ranging in size from 4 cm to 13 cm in 1-cm increments. In previous work (Collier & Lawson, 2017), this range was found to be graspable for most participants, even when their hand was taped. We only used graspable blocks because, according to the action-specific account, scaling effects are only expected if the relevant action is actually performable (Linkenauger et al., 2011). On each trial, one block was presented on a table behind a curtain. A laptop (screen diagonal = 25 cm) was placed in front of the curtain. Two black lines (0.2 cm × 1.3 cm) were displayed on the screen. The lines were initially 0.9 cm apart. The participant reached behind the curtain to grasp and pick up the block (see Fig. 3a). The experimenter told the participant which hand they should use on each trial. The participant then moved the block onto the table in front of the curtain, on the same side of the laptop as the hand they picked it up with (see Fig. 3b).
Participants were instructed to always first try to grasp the block with their thumb on one side and any other finger on the opposing side (see Fig. 3a). If the block was too big to grasp in this way, they were then allowed to pick it up and move it in any way they wished. To maximise the likelihood of participants using the hand they had just acted with as a perceptual ruler, they pressed the response keys with the same hand they had just used to grasp the block, and they kept their other hand out of sight, by their side. This ensured that they only saw the action-relevant hand while making their response. After responding, they used the same hand to place the block back behind the curtain. The experimenter then replaced the block with another block and the next trial began. Before starting the experimental trials, all participants were given two practice trials which used the smallest (4 cm) and largest (13 cm) blocks. The 4-cm block was presented to their untaped hand, and the 13-cm block was presented to their taped hand. This was to try to highlight the difference in grasping capacity following taping. During the experimental trials, participants estimated the size of each block once for each hand, giving 20 experimental trials in total (10 blocks × 2 hands). Trials were presented in a different, random order for each participant. To minimise forgetting, participants were reminded of their group-specific instructions after 10 trials. Specifically, the action capacity group was told to consider their grasping capacity, the body-size group was told to consider whether taping made their hand feel smaller, and the objective-size group was told to ignore all nonvisual factors while making their estimates. After completing the size-estimation task, participants drew around their hands with their thumb and fingers spread as far apart as possible. They first drew around their taped hand (still taped), then their taped hand (with the tape removed), and finally their untaped hand. They then completed a questionnaire on a computer. This asked what they believed the main manipulations of the experiment were, and whether they believed that their responses were influenced by having their fingers taped together and by the experimental instructions. After this, the experimenter asked participants specifically whether they thought that having their fingers taped together had made objects appear bigger, smaller, or about the same size in their taped hand relative to their untaped hand. The entire procedure took about 20 minutes.

Object size estimation task
We excluded six trials where the participant was unable to grasp the block in the manner specified using their taped hand (one 12-cm trial and five 13-cm trials), plus the six corresponding trials for that participant for their untaped hand. In addition, a further 16 trials were excluded due to invalid responses (e.g. pressing the Enter key without adjusting the distance between the lines). To test whether size estimates differed for taped versus untaped hands, we calculated perceived block size as a proportion of actual block size, then averaged these proportions over all block sizes tested for a given participant. These ratios were used as the dependent variable in a mixed ANOVA (see footnote 1) with Taping (taped/untaped) as a within-participants factor and Instruction Group (action capacity/objective size/body size) and Tape Group (LHTaped/RHTaped) as between-participants factors.
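This analysis can be illustrated with a minimal R sketch. It assumes a hypothetical trial-level data frame d with one row per trial and factor columns subject, taping, instruction_group and tape_group, plus numeric columns estimated_width and actual_width; note that base R's aov reports Type I sums of squares, so a dedicated package (e.g. afex) would be needed to reproduce Type III values exactly.

# Perceived size as a proportion of actual size, per trial:
d$ratio <- d$estimated_width / d$actual_width

# Average the ratios over block sizes within each cell for each participant:
agg <- aggregate(ratio ~ subject + taping + instruction_group + tape_group,
                 data = d, FUN = mean)

# 2 (Taping, within) x 3 (Instruction Group) x 2 (Tape Group) mixed ANOVA,
# with Taping nested within subjects via the Error() term:
fit <- aov(ratio ~ taping * instruction_group * tape_group +
             Error(subject / taping), data = agg)
summary(fit)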
There were no significant effects. For the main effects: Taping, F(1, 48) = 0.416, p = .5, ηp² = .01; Instruction Group, F(2, 48) = 0.754, p = .5, ηp² = .03; and Tape Group, F(1, 48) = 2.136, p = .2, ηp² = .04. For the interactions: Taping × Instruction Group, F(2, 48) = 0.517, p = .5, ηp² = .02; Taping × Tape Group, F(1, 48) = 0.037, p = .8, ηp² = .001; Instruction Group × Tape Group, F(2, 48) = 1.817, p = .2, ηp² = .07; and Taping × Instruction Group × Tape Group, F(2, 48) = 0.309, p = .7, ηp² = .01 (see Fig. 4).

We also checked whether participants estimated block size in a way that was consistent with their beliefs about their own biases on this task, based on their postexperiment responses. To do this we analysed size estimates only for participants who chose the action-specific prediction (collapsing over instruction group and tape group, n = 26; see Table 4). If their post hoc beliefs were consistent with their experimental responses, then they should have estimated blocks as larger for their taped hand. However, a paired-samples t test for this subgroup revealed no difference between their size estimates for their taped and untaped hands, t(25) = 1.419, p = .2.

Footnote 1: For each experiment reported here we also tested for the original effect reported by Linkenauger et al. (2011), that objects grasped by the right hand would be estimated as smaller than those grasped by the left hand. In Experiment 3, a mixed ANOVA was conducted with Grasping Hand (left/right) as a within-participants factor and Tape Group (LHTaped/RHTaped) as a between-subjects factor. There were no significant main effects: Grasping Hand, F(1, 16) = 1.208, p = .3, ηp² = .07, and Tape Group, F(1, 16) = 0.771, p = .4, ηp² = .05. There was a significant Grasping Hand × Tape Group interaction, F(1, 16) = 4.936, p = .041, ηp² = .24. Bonferroni-corrected pairwise comparisons showed that for the LHTaped group, estimates for the left hand were greater than for the right hand (mean difference = 0.26, p = .032), but for the RHTaped group there was no difference between estimates for the left and right hands (mean difference = -0.009, p = .4).

We ran Bayesian analyses to test the strength of evidence for the null effects revealed by the ANOVA (see Table 1). We used the procedure described by Masson (2011), which determines the posterior probabilities for both the null and alternative hypotheses based on the Type III sum of squares values for the effect. This method can provide confidence that a null effect is not simply the result of a Type II error. We used the descriptive terms for strength of evidence suggested by Raftery (1995).
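Masson's (2011) procedure converts an effect's sums of squares into an approximate posterior probability of the null hypothesis via the BIC. The following is a minimal sketch in R under that reading of the method; the input values are hypothetical, not the values from the experiments reported here.

# BIC-based posterior probability of the null (Masson, 2011):
# ss_effect  - Type III sum of squares for the effect
# ss_error   - error sum of squares for that effect
# df_effect  - degrees of freedom of the effect
# n          - number of independent observations
pbic_null <- function(ss_effect, ss_error, df_effect, n) {
  # Delta BIC for H1 (effect included) relative to H0 (effect excluded):
  delta_bic <- n * log(ss_error / (ss_effect + ss_error)) + df_effect * log(n)
  bf01 <- exp(delta_bic / 2)   # approximate Bayes factor favouring the null
  bf01 / (1 + bf01)            # posterior probability of H0 given the data
}

# Hypothetical example: a tiny effect in a sample of 54 participants.
pbic_null(ss_effect = 0.002, ss_error = 0.25, df_effect = 1, n = 54)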
Hand span as an estimate of action capacity
We used participants' drawings around their outspread fingers to estimate their maximum hand span, to check whether this was reduced by taping. A mixed ANOVA was conducted with Hand (still-taped/was-taped-but-tape-removed/untaped) as a within-participants factor and Tape Group (LHTaped/RHTaped) as a between-participants factor. Hand was significant, F(2, 104) = 212.766, p < .001, ηp² = .80. Maximum hand span was lower for the still-taped hand than for either the hand that was taped but with the tape removed or the untaped hand (see Table 2). There was no effect of Tape Group, F(1, 52) = 1.012, p = .3, ηp² = .02, or a Hand × Tape Group interaction, F(2, 104) = 0.026, p = .9.

[Fig. 3. Trial procedure in Experiment 1 for an untaped right-hand trial (the procedure was identical for the taped hand). a: The participant has reached behind the curtain with their right hand to grasp and move the block (size shown here = 13 cm). The inset shows that the participant has successfully grasped the block using the specified grasp: the thumb on one side and any other finger on the opposite side. b: The participant has moved the block to the right side of the laptop and placed it flat on the table. They are using their right hand to move the lines on the screen to visually match the width of the block. The experimental procedure was identical in Experiment 2. The experimental procedure was identical in Experiment 3, except that participants verbally rated how difficult the block had been to grasp before visually matching its size on the screen.]

Postexperiment questions
The number of participants across all groups in Experiments 1-3 who agreed that taping or instructions influenced their estimates of object size (Questions 6 and 8 in the questionnaire) is given in Table 3. The number of participants who responded that objects appeared bigger, the same size, or smaller for trials using their taped relative to their untaped hand (asked verbally by the experimenter at the end of the experiment) is given in Table 4. Detailed responses to further open-ended questions can be found in the supplementary material.

Discussion
We did not find scaling effects on object size estimates as would be predicted by the action-specific account. In addition, estimates of object size did not differ between the taped and untaped hands in any of the three groups, so participants were not sensitive to leading instructions. We therefore found no evidence that differences in demand characteristics due to hypothesis guessing could explain why Collier and Lawson (2017) failed to replicate Linkenauger et al. (2011, Experiment 2). We re-examined this issue in Experiment 2.

Experiment 2
In Experiment 1, the instructions given to the three groups may not have been sufficiently explicit to influence performance. For example, although the instructions for the action capacity group implied that grasping capacity might matter for size estimation, the expected direction of its effect still had to be inferred by participants. In Experiment 2, we investigated whether hypothesis guessing could influence performance if we directly told participants the results that we expected to obtain. We adapted the instructions from the action capacity group in Experiment 1 to explicitly tell participants that their estimates of object size were expected to be greater for their taped hand than for their untaped hand.

Participants
Eighteen participants (mean age = 18.5 years, no males, mean Edinburgh Handedness Inventory score = 87.5, range: 50-100) were recruited for this study. Participants all self-reported as right-handed, and either volunteered or were rewarded with course credit for their time.

Apparatus, stimuli, and procedure
The apparatus, stimuli, and procedure were identical to those in Experiment 1, except for the following changes. First, participants' fingers were taped before the instructions were read to them. This was to maximise the likelihood that, as they were given their instructions, participants would consider the relationship between grasping capacity and perceived object size that was being described to them. Second, only one set of instructions was used, which was adapted from that of the action capacity group of Experiment 1, as follows.
Action capacity-direction-specified group
"In this experiment we will ask you to estimate the size of square stimuli. There are many possible interpretations of this instruction, so we want to make it clear what it is we want you to estimate. Imagine looking at a mug on a table: There may be things in the environment that make it visually appear closer to or further away than its actual physical distance from you, that is, the distance measured by a tape measure. For example, if it appears difficult to reach, you may perceive the distance to the mug as greater than it really is. Similarly, this logic can be applied to the size of objects that we act on. For example, the same thing might happen when we estimate the size of objects that we are going to pick up. In this experiment, we have taped together the fingers of one of your hands whilst your other hand has not been taped. Previous research has suggested that taping your hand makes it harder to pick up objects, and that this makes objects grasped in or seen near to your taped hand appear bigger to you. Basically, because we are clumsier when our hand is taped, objects we might pick up with it appear larger to us so that we are more careful when picking them up. In this experiment you will be asked to estimate the size of objects that you have just picked up with either your taped hand or your untaped hand. Take all relevant nonvisual factors into account when you estimate object size, including whether having your fingers taped together makes the objects appear bigger compared to your untaped hand."

Participants were reminded after 10 trials that they should consider whether the blocks appeared larger in their taped hand, and the entire procedure lasted around 20 minutes.

Object-size estimation task
We excluded four trials due to invalid responses (e.g. pressing the Enter key without adjusting the distance between the lines). We calculated perceived block size ratios as in Experiment 1. These ratios were the dependent variable in a mixed ANOVA with Taping (taped/untaped) as a within-participants factor and Tape Group (LHTaped/RHTaped) as a between-participants factor. Taped hand estimates (M = 0.83, SE = 0.03) were greater than untaped hand estimates (M = 0.78, SE = 0.03), F(1, 16) = 7.282, p = .016, ηp² = .31 (see Fig. 4). There was also a […] (see Table 5).

[Table 3. The number (and %) of participants in each group in Experiments 1, 2, and 3 who agreed in a postexperiment questionnaire that taping or instructions influenced their estimates of object size.]

Hand span as an estimate of action capacity
We used participants' drawings around their outspread fingers to estimate their maximum hand span, to check whether this was reduced by taping. A mixed ANOVA was conducted with Hand (still-taped/was-taped-but-tape-removed/untaped) as a within-participants factor and Tape Group (LHTaped/RHTaped) as a between-participants factor. Hand was significant, F(2, 32) = 102.715, p < .001, ηp² = .87. Maximum hand span was lower for the still-taped hand than for either the hand that was taped but with the tape removed or the untaped hand (see Table 2). There was no effect of Tape Group, F(1, 16) = 0.037, p = .9, ηp² = .002, or a Hand × Tape Group interaction, F(2, 32) = 2.912, p = .07, ηp² = .15. Thus the taping manipulation significantly reduced maximum hand span by ~4 cm regardless of which hand was taped.

Discussion
In Experiment 2, participants estimated blocks as larger when they grasped them using their taped rather than their untaped hand.
Thus when, unlike in Experiment 1, the desired outcome was clearly and explicitly stated in the preexperimental instructions, participants produced scaling effects averaging ~4%. This effect is modest, but is comparable to the original effect reported in Experiment 2 of Linkenauger et al. (2011), where objects to be grasped in the right hand were estimated as ~3% smaller than objects to be grasped in the left hand. More generally, Firestone noted that many effects demonstrated by the action-specific account are only modest in size. He wrote that "paternalistic perceptual effects are the wrong size for the job" (Firestone, 2013, p. 458). Our results are consistent with previous work suggesting that hypothesis guessing can influence performance to produce the effects reported in the action-specific literature (Durgin et al., 2012; Firestone & Scholl, 2014; Woods et al., 2009). The results of Experiments 1 and 2 suggest that in the size estimation task used both here and by Linkenauger et al. (2011, Experiment 2), participants may respond to demand characteristics from hypothesis guessing. However, this required instructions to be explicit and overtly biased. Such extreme demand characteristics seem unlikely to explain the perceptual scaling effects reported in Experiment 2 of Linkenauger et al. (2011). In our final experiment we tried to resolve why scaling effects were obtained in Experiment 2 of Linkenauger et al. (2011) but not in Experiment 3 of Collier and Lawson (2017). We did this by examining the influence of a different type of demand characteristic on object size estimates, namely context effects.

Experiment 3
When action capacity and spatial properties are estimated in quick succession, and on every trial, as in Experiment 2 of Linkenauger et al. (2011), the experimental context may subtly imply that the two estimates are related, or the two types of estimates may become confused. Importantly, participants may not need to be aware of such context effects for them to occur, unlike explicit hypothesis guessing. Nevertheless, and importantly, scaling effects on spatial estimates arising from either type of demand characteristic are not genuine perceptual effects, because the participant's visual representation of the environment is not altered (Firestone, 2013). On every trial in Experiment 2 of Linkenauger et al. (2011), participants estimated the graspability of an object immediately before estimating its apparent size. The dimensions of graspable-to-ungraspable and small-to-large may be conceptually linked, in a way similar to cross-sensory correspondences between sensory modalities (e.g. Walker, 2012; Walker et al., 2016). If so, then people may find it hard to assess them independently in a context where they are asked about both. We discussed this possibility in Collier and Lawson (2017) and referred to it as conflation. However, in our previous study, we did not test whether we could replicate Linkenauger et al.'s (2011, Experiment 2) scaling effect by introducing a context in which measures of spatial perception were likely to be combined or confused with estimates of action capacity. This was done in Experiment 3. Here, on every trial, participants rated how difficult the block had been to grasp (graspability) and then estimated its size. Note that we were not interested in the results of the graspability task.
The purpose of this task was to test whether drawing attention to the graspability of an object immediately before estimating its size would induce conflation between estimates of graspability and estimates of size. We reasoned that, in this conflation context, participants might estimate objects grasped in their taped hand as bigger than objects grasped in their untaped hand.

Participants
Eighteen participants (mean age = 19.6 years, two males) were recruited for this study. Participants all self-reported as right-handed and were rewarded with course credit for their time.

Apparatus, stimuli, and procedure
The apparatus, stimuli, and procedure were identical to those of the action capacity group in Experiment 1, apart from the following change. Participants completed an additional object graspability task on each trial. For this task, participants verbally rated the difficulty of grasping each block on a scale of 1 (very easy) to 10 (very difficult) after they had picked it up and placed it on the table. They then estimated the size of the block as in Experiment 1.

Object graspability task
We first tested whether participants rated blocks they had grasped in their taped hand as harder to grasp than blocks they had grasped in their untaped hand. Mean difficulty ratings were used as the dependent variable in a mixed ANOVA with Taping (taped/untaped) as a within-participants factor and Tape Group (LHTaped/RHTaped) as a between-participants factor. Participants rated objects they had grasped in their taped hand (M = 3.4, SD = 1.2) as more difficult to grasp than objects they had grasped in their untaped hand (M = 2.4, SD = 0.8), F(1, 16) = 21.519, p < .001, ηp² = .57. There was no effect of Tape Group, F(1, 16) = 0.814, p = .4, ηp² = .05, or a Taping × Tape Group interaction, F(1, 16) = 1.202, p = .3, ηp² = .07.

Object-size estimation task
We excluded two trials where the participant was unable to grasp the block in the manner specified using their taped hand (both were 13-cm trials), plus the two corresponding trials for that participant for their untaped hand. A further three trials were excluded due to invalid responses (e.g. pressing the Enter key without adjusting the distance between the lines). Perceived block size ratios were calculated as in Experiments 1 and 2. These ratios were the dependent variable in a mixed ANOVA with Taping (taped/untaped) as a within-participants factor and Tape Group (LHTaped/RHTaped) as a between-participants factor. Taped hand estimates (M = 0.84, SE = 0.03) were greater than untaped hand estimates (M = 0.82, SE = 0.03), F(1, 16) = 4.936, p = .041, ηp² = .24. There was no effect of Tape Group, F(1, 16) = 0.771, p = .4, ηp² = .05, or a Taping × Tape Group interaction, F(1, 16) = 1.208, p = .3, ηp² = .07 (see Fig. 4). As in Experiments 1 and 2, we ran Bayesian analyses to check the strength of evidence for the effects revealed by the ANOVA (see Table 6).

Hand span as an estimate of action capacity
We used participants' drawings around their outspread fingers to estimate their maximum hand span, to check whether this was reduced by taping. A mixed ANOVA was conducted with Hand (still-taped/was-taped-but-tape-removed/untaped) as a within-participants factor and Tape Group (LHTaped/RHTaped) as a between-participants factor. Hand was significant, F(2, 32) = 48.980, p < .001, ηp² = .75. Maximum hand span was lower for the still-taped hand than for either the hand that was taped but with the tape removed or the untaped hand (see Table 2).
There was no effect of Tape Group, F(1, 16) = 0.497, p = .5, ηp² = .03, or a Hand × Tape Group interaction, F(2, 32) = 0.596, p = .5, ηp² = .04. Thus, the taping manipulation significantly reduced maximum hand span by ~4 cm, regardless of which hand was taped.

Discussion
In Experiment 3, participants rated objects as harder to grasp in their taped hand than in their untaped hand. They then went on to estimate blocks that they had grasped in their taped hand as larger than blocks they had grasped in their untaped hand. These results provide evidence for the suggestion by Collier and Lawson (2017) that the scaling effect reported by Linkenauger et al. (2011, Experiment 2) occurred because action capacity estimates were conflated with size estimates. This likely occurred because participants were asked to estimate graspability immediately before estimating object size on every trial. This influence of context would only need to occur occasionally to produce the modest scaling effects that have been observed (~3% in both Experiment 3 here and in Experiment 2 of Linkenauger et al., 2011). In Experiment 3 here, 11 out of 18 participants estimated blocks as larger for their taped hand than for their untaped hand. Furthermore, participants appear able to estimate object size independently of grasping capacity: Collier and Lawson (2017) found that when participants were explicitly instructed that grasping and size estimates were being collected for separate, unrelated experiments, there was no influence of grasping capacity on estimated object size. Together, these results indicate that the scaling effect reported by Linkenauger et al. (2011, Experiment 2) was not truly perceptual.

General discussion
In the present studies, we were interested in understanding the basis of biases that have previously been reported in the perception of object size and that have been interpreted as supporting the action-specific account. Specifically, Linkenauger et al. (2011) argued that apparent grasping capacity can influence perceived object size. However, we subsequently found no evidence to support this claim (Collier & Lawson, 2017). In the present studies, we sought to understand whether scaling effects were obtained by Linkenauger et al. (2011, Experiment 2), but not by Collier and Lawson (2017), because of differences in demand characteristics. In Experiment 1, we investigated whether leading instructions would bias estimates of object size due to participants explicitly hypothesis guessing. We reasoned that estimated object size could increase if perceived hand size increased (on a body-size scaling account), or could scale in the opposite direction based on changes in perceived grasping capacity (consistent with the action-specific account; see Fig. 1). Neither of these predictions was supported: We found no evidence that participants adjusted their responses after inferring the desired outcome of the experiment based on the instructions they were given. We re-examined this issue in Experiment 2 using a more powerful manipulation. Here, the instructions clearly and explicitly specified the direction of the expected effect based on the action-specific account. Now participants produced results consistent with the expectations arising from their instructions: Blocks that were harder to grasp because they were picked up in the taped hand were estimated as larger than blocks that had been grasped in the untaped hand.
Taken together, these results suggest that hypothesis guessing is an unlikely explanation for the results of Linkenauger et al. (2011, Experiment 2), because scaling effects were only obtained in Experiment 2 here, when we used unrealistically directive instructions. Orne (1962) stated that "response to the demand characteristics is not merely conscious compliance" (p. 779) and that other, subtler, forms of demand characteristics can also influence participants' responses. Based on this suggestion, and our own proposal (Collier & Lawson, 2017) that conflation might explain Linkenauger et al.'s (2011, Experiment 2) results, in Experiment 3 we investigated whether the experimental context could implicitly influence performance. This was manipulated by having participants report an object's graspability immediately before estimating its size. Now we found the predicted scaling effect: Participants estimated blocks as larger after grasping them with their taped relative to their untaped hand. This suggests that Linkenauger et al.'s (2011, Experiment 2) scaling effect likely arose as a result of asking participants to report graspability before object size on every trial. We propose that their task encouraged a conflation between estimates of action capacity and spatial extent, so that the scaling effects that they observed did not reflect a change in perception in the strong sense proposed by the action-specific account. Our results expand on what is already known about demand characteristics in the action-specific literature by showing that these demand characteristics can take multiple forms. In Durgin et al. (2009) and Firestone and Scholl (2014), participants produced action-specific effects if no reason for a salient experimental manipulation was given, whereas participants who were given an explanation for the manipulation showed no effect. In these studies, action-specific effects seemed to occur only when participants guessed the experimental prediction. In contrast, the results of Experiment 1 here suggest that participants may not have explicitly guessed the experimental hypothesis in the object size-estimation task used by Linkenauger et al. (2011, Experiment 2). Nevertheless, the results of Experiment 3 here suggest that the scaling effect reported by Linkenauger et al. (2011, Experiment 2) could still reflect postperceptual demand characteristics due to an implicit context effect. Such context effects, like hypothesis guessing, are inconsistent with the explanation of scaling results provided by the action-specific account, namely, that participants actually see stimuli differently if their action capacity changes. We previously demonstrated that context effects can be overridden using instructions which carefully distinguish between estimates of action capacity and estimates of spatial qualities. The final experiment reported in Collier and Lawson (2017) was similar to Experiment 3 here in that we asked participants to first grasp and then estimate the size of blocks on the same trial. Unlike in Experiment 3 here, the experimenter emphasised that they were interested in participants' grasping behaviour and said that they would record how participants grasped blocks on each trial. However, using a cover story about time constraints on data collection, participants were also told that the grasping task was producing data for a separate study from the size estimation task.
In contrast to Experiment 3 here, we found no difference between size estimates made for objects grasped in taped compared to untaped hands in the final experiment of Collier and Lawson (2017). Thus, context effects were eliminated by telling participants that the tasks were separate, similar to the way in which hypothesis guessing was controlled for by Durgin et al. (2009), by giving participants a reason for wearing the backpack while estimating hill slant. Thus, previous work has found that providing a convincing cover story can eliminate action-specific scaling effects (Collier & Lawson, 2017; Durgin et al., 2009, 2012; Firestone & Scholl, 2014) and that the use of leading instructions can induce these scaling effects (Woods et al., 2009). In contrast, we found no evidence that explicit hypothesis guessing influenced estimated object size in Experiment 1 here. We suggest that this may have been because the experimental hypothesis was relatively hard to infer in this task, particularly since the group-specific instructions did not specify the direction of the predicted effect. Consistent with this interpretation, we did obtain scaling effects in Experiment 2, when participants were directly told the expected results of the study.

We have argued that scaling effects on estimates of object size may arise if these estimates are conflated with those of grasping capacity. Scaling effects were obtained in both Experiment 3 here and Experiment 2 of Linkenauger et al. (2011), when participants were actively and explicitly encouraged to think about and report their grasping capacity on every trial. It is important to emphasise that this context is unusual and does not reflect everyday life. Scaling effects were not obtained in the first four experiments reported by Collier and Lawson (2017), when participants were not encouraged to think about their grasping behaviour or capacity, even though they actually grasped blocks on every size estimation trial. Thus, scaling effects consistent with the action-specific account seem to be context-dependent, such that they only appear under narrow, non-ecological conditions.

Not all studies which have reported an influence of grasping capacity on estimated object size required participants to explicitly report their grasping capacity. For example, in Experiment 1 of Linkenauger et al. (2011), a disc was placed in the palm of the left and right hands of right-handed participants and they were asked which disc appeared larger. Participants also visually matched the size of the discs. In both tasks, the discs in the right hand were estimated as smaller than the discs in the left hand. Since participants did not have to report their grasping capacity, these results cannot be explained by context effects. There is, though, an alternative explanation for these results which does not assume that action-specific scaling occurred. Right-handers have repeatedly been shown to believe that their right hand is larger than their left hand (Collier & Lawson, 2017; Linkenauger et al., 2009, 2011), so the discs surrounded by a perceptually larger object (the right hand) may have appeared smaller than the discs surrounded by a perceptually smaller object (the left hand). In fact, Linkenauger et al. (2011, Experiment 1) themselves suggested that such a size-contrast effect could have caused the results they obtained, rather than that perceived object size was scaled according to grasping capacity.
One reason that participants are asked to estimate their grasping capacity in studies supporting the action-specific account is to check perceived action capacity, since action-specific scaling effects are only predicted if people think that the action can be performed (Linkenauger et al., 2011; Witt, 2016). For example, only objects that people think they can grasp should be scaled; no effect should be found for objects larger than perceived maximum grasp (Linkenauger et al., 2011). One interesting issue, that has not yet been addressed, is whether scaling effects should be expected when objects are so small that they could be easily grasped regardless of whether they are grasped in the left or right hand, or indeed in a taped or untaped hand. Cañal-Bruland and van der Kamp (2015) suggested that distortions in spatial perception as a result of action capacity should be strongest at the critical boundaries for action. Investigating this hypothesis would be a valuable route for future research to pursue.

In order to produce a large, robust, yet reversible effect on both perceived and actual grasping capacity, we used a taping manipulation in the experiments reported here. This differed from the manipulation of perceived grasping capacity investigated in Experiment 2 of Linkenauger et al. (2011). They took advantage of the bias for right-handers to overestimate both the size and the grasping capacity of their right hand relative to their left hand. This bias existed prior to the start of the experiment and may arise from a lifetime of experience using their right hand more than their left hand. There is also greater representation for the right hand than the left hand in the somatosensory cortex of right-handers (Sörös et al., 1999). Such differences could be argued to explain why our results differed from those of Linkenauger et al. (2011). However, we think this is unlikely. First, Experiment 3 of Linkenauger et al. (2011) manipulated hand size by magnifying the hand. Like our taping manipulation, this is a short-term, within-experiment manipulation. Nevertheless, they reported differences in estimated object size when objects were placed next to the magnified, compared to the unmagnified, hand. Second, in our previous work, we found that participants rapidly updated their perceived grasping capacity after attempting to grasp objects with their taped hand (Collier & Lawson, 2017). This suggests that, although taping is a short-term manipulation, it is effective in influencing perceived grasping capacity. Thus, although our manipulation of grasping capacity differed from that used in Experiment 2 of Linkenauger et al. (2011), we believe our method is appropriate for investigating the effect they reported.

Modular theories of perception claim that perception is cognitively impenetrable, meaning that it is not affected by higher-level cognition (Firestone, 2013; Firestone & Scholl, 2015). The action-specific account challenges cognitive impenetrability by suggesting that perception can be directly influenced by action capacity. However, here we only found effects consistent with the action-specific account when the experimental instructions explicitly stated the expected outcome (consistent with hypothesis guessing), or when participants estimated object size in a context which implied that their grasping capacity was relevant (consistent with context effects). If apparent grasping capacity can directly influence perceived object size, as the action-specific account claims (e.g.
Linkenauger et al., 2011), then we should also have found scaling effects when hypothesis guessing and context effects were controlled for (e.g. in Collier & Lawson, 2017), but we did not. The effects we observed in the present studies therefore seem to reflect biases at the level of judgement as opposed to true perceptual changes. By extension, our results are consistent with the idea of cognitive impenetrability.

In conclusion, the results of the present studies do not support the strong claim of the action-specific account that what we see is directly influenced by our action capacity. Our results instead suggest that the scaling effects on estimated object size that were interpreted as supporting the action-specific account by Linkenauger et al. (2011, Experiment 2) are more likely to have arisen from participants responding to subtle, easily overlooked cues within the experimental procedure. We are in agreement with Firestone and Scholl (2015), who observed: "If there is one unifying message running through our work on this topic, it is this: The details matter" (p. 59).
2018-04-03T05:57:23.525Z
2017-06-21T00:00:00.000
{ "year": 2017, "sha1": "6837625759ca1b6cfdd89b5ae13960a508fc9f10", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.3758/s13414-017-1344-3.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "6837625759ca1b6cfdd89b5ae13960a508fc9f10", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
234397568
pes2o/s2orc
v3-fos-license
Analysis of Amino Acid Sequence of SARS-CoV, SARS-CoV-2, and MERS-CoV Spike Glycoproteins: Preliminary Study for Obtaining Universal Peptide Vaccine Candidates

In the manufacture of universal peptide vaccines, it is necessary to analyze the amino acid sequences of the various candidates. Therefore, this study aims to examine the amino acid sequences of the spike glycoproteins of SARS-CoV, SARS-CoV-2, and MERS-CoV. The method used is pairwise alignment of the spike glycoprotein amino acid sequences of SARS-CoV with SARS-CoV-2, MERS-CoV with SARS-CoV-2, and SARS-CoV with MERS-CoV using the web-based EMBOSS Water software. The analysis showed that SARS-CoV and SARS-CoV-2 are very similar, with 87% similarity and 76.4% identity. In contrast, SARS-CoV-2 with MERS-CoV and SARS-CoV with MERS-CoV are very different, having similarity and identity values of less than 70%. Therefore, it is reasonable to conclude that a peptide from the spike glycoprotein would be useful only against the SARS-CoV and SARS-CoV-2 viruses.

Vaccines for MERS, SARS and COVID-19 are yet to be discovered (Slamet et al., 2013), but their development continues to progress. A peptide vaccine consists of several amino acid residues (a small protein) (Subroto et al., 2013), and with such short sequences it is possible to make a universal vaccine. Such a vaccine can serve a protective function against various types of antigens, which is the primary aim of making peptide vaccines. Therefore, a protein with strong sequence similarity across the viruses is needed to make a universal vaccine against coronaviruses (SARS, COVID-19, and MERS) (Alouane et al., 2020; Khalaj-Hedayati, 2020; Wu et al., 2020). There has been much research related to the development of vaccine candidates, one example being HPV (Human Papillomavirus). That work showed that the promising peptide vaccine candidates obtained from the HPV genome are LLITSNINA from the E1 protein, VLLCVCLLI from E5, and LLMGTLGIV from E7 (Aprilyanto & Sembiring, 2017). Furthermore, these peptides have been tested in vitro, and the results show that they are useful in activating the immune response. One of the conditions for making a peptide vaccine is that the protein antigen should be located at the outer part of the virus in order to ease the purification process. The spike glycoprotein is used as a peptide vaccine candidate since it is positioned on the outer part of the virion, and it is possessed by all types of coronavirus. Therefore, this protein is used as a candidate source of peptide vaccines for all kinds of coronavirus. This initial research was conducted to obtain an overview of its potential as a vaccine candidate; an indication of that potential is obtained by testing the similarity and identity of the spike glycoprotein amino acid sequences.

MATERIALS AND METHODS

This research used the following materials: the amino acid sequences of the spike glycoproteins of SARS-CoV-2, SARS-CoV (SARS), and MERS-CoV. The sequences were aligned pairwise with the web-based EMBOSS Water tool.

RESULTS AND DISCUSSION

Table 1 presents the results of the alignment among the three viruses, expressed as identity and similarity values. Identity is the percentage of identical matches between the two sequences over the reported aligned region (including any gaps in the length), while similarity is the percentage of matches between the two sequences over the reported aligned region (including any gaps in the length) (Taupiqurrohman et al., 2016). The identity value indicates the identical agreement of the compared amino acids, while the similarity value indicates conformity in chemical properties (Hui et al., 2020).
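To illustrate the metric being computed, a Smith-Waterman local alignment with EMBOSS Water's default protein settings (BLOSUM62, gap open 10, gap extend 0.5) can be reproduced in Biopython. This is a minimal sketch, not the authors' actual workflow; the FASTA file names are hypothetical placeholders.

import numpy as np  # not strictly needed; kept for downstream analysis
from Bio import SeqIO, pairwise2  # pairwise2 is legacy but still available
from Bio.Align import substitution_matrices

blosum62 = substitution_matrices.load("BLOSUM62")

seq_a = str(SeqIO.read("sars_cov2_spike.fasta", "fasta").seq)  # placeholder path
seq_b = str(SeqIO.read("sars_cov_spike.fasta", "fasta").seq)   # placeholder path

# Local (Smith-Waterman) alignment with EMBOSS Water's default protein
# parameters: BLOSUM62 scores, gap open 10, gap extend 0.5.
aln = pairwise2.align.localds(seq_a, seq_b, blosum62, -10.0, -0.5)[0]
a = aln.seqA[aln.start:aln.end]
b = aln.seqB[aln.start:aln.end]

pairs = list(zip(a, b))
identical = sum(x == y for x, y in pairs)
# "Similar" pairs are those scoring positively in the substitution matrix.
similar = sum(
    x == y or (x != "-" and y != "-" and blosum62[x, y] > 0) for x, y in pairs
)
print(f"Identity:   {100 * identical / len(pairs):.1f}%")
print(f"Similarity: {100 * similar / len(pairs):.1f}%")

The identity and similarity percentages printed here follow the EMBOSS definitions quoted above: identical pairs, and positively scoring pairs, divided by the aligned length including gaps.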
Table 1 also shows that the coronaviruses causing SARS and COVID-19 have a high similarity, with identity and similarity values of 76.4% and 87%, respectively. On the contrary, the comparison of MERS with COVID-19 shows that they are not similar, because the alignment results are below 70%. A low result is also shown by the comparison between SARS and MERS, with 31.6% identity and 47.3% similarity. This is consistent with the explanation of Andriani (2016), where it was stated that the coronaviruses causing SARS and COVID-19 are very close based on the evolutionary tree. According to Rice et al. (2000), however, phylogenetic conclusions (evolutionary kinship) cannot be drawn from a comparison of this type of protein alone. Nevertheless, this research has illustrated the great potential of the spike glycoprotein to be the source of peptide vaccine candidates for the SARS and COVID-19 diseases. The structures of the spike glycoproteins of SARS-CoV, SARS-CoV-2, and MERS-CoV are available in the Protein Data Bank (pdb.org).

Work Principles of Universal Peptide Vaccine. The sequence analysis shows that the spike glycoprotein can only be used as the source of peptide vaccine candidates for SARS and COVID-19. This should be considered carefully, since the working principle of a peptide vaccine is based on the immune system. The two common sites of viral infection are the outside (specific body tissue) and the inside of an infected cell (body cell). When a part of the tissue is infected, the immune cells in that region begin to respond (Mothes et al., 2010; Mallapaty, 2020). This is evident in macrophages, one type of immune cell responsible for initiating the formation of antibodies through the activation of helper T cells. To activate these cells, macrophages phagocytize the incoming antigen protein. The products of phagocytosis (small peptides) are then raised to the cell surface by major histocompatibility complex (MHC) class II proteins to be recognized by helper T cell receptors. Andriani (2016) stated that this presented part can be predicted and made into a peptide vaccine.

During an intracellular infection, the cell responds through a series of reactions (Fig. 4). An important part of this response in relation to the peptide vaccine is that the cell will attempt to bring part of the virus to its surface. This is conducted by MHC class I proteins and recognized by cytotoxic T cells, which function to reduce the infection. The part of the virus raised by MHC I and II is another peptide vaccine candidate that can be predicted using the spike glycoprotein (marked in the box in the picture). Because the spike glycoprotein is located on the outer part of the virus, it is a potential candidate for a peptide vaccine; initially, it is recognized by, and attached to, the cell surface.

Every disease has a cure. If the right medicine is found for a disease, the disease will be cured with the permission of Allah Azza wa Jalla (Sahih Muslim No. 4084). Based on this hadith, we can learn that there is no disease on this earth created by Allah swt. without a cure. At present, much research has been carried out by scientists to find the most appropriate vaccine candidates for use in the prevention of infectious diseases caused by the coronavirus. The success of finding a vaccine candidate with the highest level of effectiveness is also inseparable from the power of Allah Almighty, as stated in His word in QS. Ash-Shu'ara verse 80 (Kementerian Agama RI, 2019).
This verse explains that it is Allah swt. who heals a man when he is sick. Allah has the power to heal any disease that a person has. But man, through the use of the mind by studying science, must also find out how to obtain this healing. Through science, humans can find out the types of amino acids from the spike glycoproteins of the various types of coronaviruses that are most appropriate for use in the production of universal peptide vaccines against the infectious diseases caused by these viruses. The lesson that can be taken from this verse is that diseases experienced by humans are the result of human actions themselves, including infection with diseases caused by the coronavirus, one cause of which is the lack of a clean lifestyle. Through the efforts made by humans and by the will of Allah swt., diseases suffered by humans can be cured. Diseases that occur in humans can also be a reminder to always be grateful for the various blessings from Allah swt., one of which is the favor of healing from an illness.

CONCLUSION

SARS-CoV-2 (COVID-19) and SARS are very similar, with 87% similarity and 76.4% identity values. In contrast, COVID-19 with MERS and SARS with MERS are very different, with similarity and identity values below 70%. Therefore, the spike glycoprotein can only be used as a source of peptide vaccine candidates for COVID-19 and SARS.
2021-05-13T00:03:30.515Z
2020-12-30T00:00:00.000
{ "year": 2020, "sha1": "bc7486092cebfd0257a892bc742a36573a992bae", "oa_license": "CCBY", "oa_url": "http://journal.uin-alauddin.ac.id/index.php/biogenesis/article/download/15696/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4cd92d84a87f6c7aa3b66537fa4120f9b1c06c44", "s2fieldsofstudy": [ "Biology", "Chemistry", "Computer Science" ], "extfieldsofstudy": [ "Biology" ] }
260899984
pes2o/s2orc
v3-fos-license
GHOST Commissioning Science Results II: a very metal-poor star witnessing the early Galactic assembly

This study focuses on Pristine_180956.78−294759.8 (hereafter P180956, [Fe/H] = −1.95 ± 0.02), a star selected from the Pristine Inner Galaxy Survey (PIGS), and followed up with the recently commissioned Gemini High-resolution Optical SpecTrograph (GHOST) at the Gemini South telescope. The GHOST spectrograph's high efficiency in the blue spectral region (3700−4800 Å) enables the detection of elemental tracers of early supernovae (e.g., Al, Mn, Sr, Eu), which were not accessible in the previous analysis of P180956. The star exhibits chemical signatures resembling those found in ultra-faint dwarf systems, characterised by very low abundances of neutron-capture elements (Sr, Ba, Eu), which are uncommon among stars of comparable metallicity in the Milky Way. Our analysis suggests that P180956 bears the chemical imprints of a small number (2 or 4) of low-mass hypernovae (∼10−15 M⊙), which are needed to reproduce the abundance pattern of the light elements (e.g., [Si, Ti/Mg, Ca] ∼ 0.6), and one fast-rotating intermediate-mass supernova (∼300 km s−1, ∼80−120 M⊙). Both types of supernovae explain the high [Sr/Ba] of P180956 (∼1.2). The small pericentric (∼0.7 kpc) and apocentric (∼13 kpc) distances and its orbit confined to the plane (≲2 kpc) indicate that this star was likely accreted during the early Galactic assembly phase. Its chemo-dynamical properties suggest that P180956 formed in a system similar to an ultra-faint dwarf galaxy accreted either alone, as one of the low-mass building blocks of the proto-Galaxy, or as a satellite of Gaia-Sausage-Enceladus. The combination of Gemini's large aperture with GHOST's high efficiency and broad spectral coverage makes this new spectrograph one of the leading instruments for near-field cosmology investigations.

INTRODUCTION

Low-metallicity stars are among the oldest stars in the Galaxy. Cosmological simulations suggest that most metal-poor stars formed within 2−3 Gyr after the Big Bang, likely in low-mass systems that were accreted early on into the Galaxy ("building blocks", e.g., Starkenburg et al. 2017b; El-Badry et al. 2018; Sestito et al. 2021). These merging building blocks contributed stars, gas, and dark matter to the proto-Milky Way. Because they formed at the bottom of the potential well of the merging building blocks, these stars are predicted to occupy the inner regions of the present-day Galaxy (e.g., White & Springel 2000; Starkenburg et al. 2017b; El-Badry et al. 2018; Sestito et al. 2021). Systems accreted later are anticipated to disperse their stars primarily in the halo (Bullock & Johnston 2005; Johnston et al. 2008; Tissera, White & Scannapieco 2012), or possibly the disc (e.g., Abadi et al. 2003; Sestito et al. 2021; Santistevan et al. 2021). An in-situ component may also form from the gas deposited by the building blocks, whose stars have recently been called the Aurora stars (Belokurov & Kravtsov 2022). This in-situ component may have formed in a lumpy and chaotic interstellar medium (ISM), possibly resembling the chemical properties of globular clusters (Belokurov & Kravtsov 2023).

The most chemically pristine stars in the Milky Way (MW) may have been enriched by only one or a few supernova or hypernova events (e.g., Frebel, Kirby & Simon 2010; Ishigaki et al. 2018).
The study of the orbital properties and chemical abundance patterns of pristine stars is essential for understanding the lives and deaths of the first stars and the assembly history of the Galaxy (e.g., Freeman & Bland-Hawthorn 2002; Venn et al. 2004; Tumlinson 2010; Wise et al. 2012; Karlsson, Bromm & Bland-Hawthorn 2013).

Metal-poor stars in and towards the Galactic bulge can serve as important tracers of the earliest stages of Galactic assembly, yet their detection is extremely challenging (e.g., Schlaufman & Casey 2014; Lamb et al. 2017). The inner regions of the MW are dominated by a metal-rich population and disrupted globular clusters (Ness et al. 2013a, 2014; Bensby et al. 2013, 2017; Schiavon et al. 2017; Schultheis et al. 2019). Furthermore, extreme interstellar extinction and stellar crowding have made photometric surveys of bulge metal-poor stars exceedingly difficult.

The Abundances and Radial velocity Galactic Origins Survey (ARGOS, Ness et al. 2013b) found that ≲1 percent of their sample had [Fe/H] < −1.5, resulting in a total of 84 stars. The metallicity-sensitive photometric filter from the SkyMapper Southern Survey (Bessell et al. 2011; Wolf et al. 2018) has been used by the Extremely Metal-poor BuLge stars with AAOmega (EMBLA, Howes et al. 2014, 2015, 2016) survey to select very metal-poor stars (VMPs, [Fe/H] ≤ −2.0). Their high-resolution analysis of 63 VMPs revealed that the majority chemically resembled metal-poor stars in the Galactic halo, with the exception of a lack of carbon-rich stars and a larger scatter in [α/Fe] abundances. Additionally, their kinematic analysis found that it was challenging to distinguish stars that were born in the bulge from those that are merely in the inner halo. The Chemical Origins of Metal-poor Bulge Stars survey (COMBS, Lucey et al. 2019) studied the chemo-dynamical properties of inner Galactic stars, finding that around ∼50 percent of their sample is composed of halo interlopers, while their chemical properties resemble those of the halo (Lucey et al. 2021, 2022).

Similar to the EMBLA survey, the Pristine Inner Galaxy Survey (PIGS, Arentsen et al. 2020b,a) selected metal-poor targets from the narrow-band photometry of the Pristine survey (Starkenburg et al. 2017a). The Pristine survey, conducted at the Canada-France-Hawaii Telescope (CFHT), utilises the CaHK filter in combination with broad-band photometry to provide a highly efficient method of identifying low-metallicity stars (∼56 percent success rate at [Fe/H] ≤ −2.5, Youakim et al. 2017; Aguado et al. 2019; Venn et al. 2020; Lucchesi et al. 2022). Around ∼12,000 inner Galaxy metal-poor candidates selected by PIGS were observed with low-/medium-resolution spectroscopy using the AAOmega spectrograph on the Anglo-Australian Telescope (AAT). The results of these observations showed ∼80 percent efficiency in identifying VMP stars towards the bulge (Arentsen et al. 2020a) and the Sagittarius dwarf galaxy (Vitali et al. 2022) using the Pristine metallicity-sensitive filter for initial selection. Interestingly, within PIGS, Mashonkina et al. (2023) report the serendipitous discovery of the first r- and s-process-rich carbon-enhanced star (CEMP-r/s) in the inner Galaxy.

In a recent study, Sestito et al. (2023a) analysed high-resolution spectra of 17 metal-poor stars selected from the PIGS survey, taken with the Gemini Remote Access to CFHT ESPaDOnS Spectrograph (GRACES, Chene et al. 2014; Pazder et al. 2014).
Their findings, consistent with Howes et al. (2016), indicate that the chemo-dynamical properties of the VMP population in the inner Galaxy resemble those of the halo, suggesting a common origin from disrupted building blocks. Sestito et al. (2023a) report stars with chemical abundances compatible with those of disrupted second-generation globular clusters (GCs), one with exceptionally low metallicity ([Fe/H] ∼ −3.3), well below the metallicity floor of GCs ([Fe/H] ∼ −2.8, Beasley et al. 2019). This provides further evidence that extremely metal-poor structures (EMPs, [Fe/H] ≤ −3.0) can form in the early Universe (see also Martin et al. 2022, on the discovery of the disrupted EMP globular cluster C-19).

In this paper, we present a new analysis of the inner Galactic very metal-poor star Pristine_180956.78−294759.8 (P180956), from spectra taken during the commissioning of the new Gemini High-resolution Optical SpecTrograph (GHOST, Pazder et al. 2020). This star was previously analysed in the PIGS/GRACES analysis (Sestito et al. 2023a), and it can be observed from either the northern hemisphere (GRACES) or the south (GHOST, at Gemini South). P180956 was selected for this work for its unusually low [Na, Ca/Mg] and [Ba/Fe] ratios and for its highly eccentric orbit that remains confined close to the Milky Way plane. The new GHOST spectrograph has also been used in the analysis of two stars in the Reticulum II ultra-faint dwarf galaxy (Hayes et al. 2023), and in the presentation of the spectrum of the r-process-rich standard star HD 222925 (Hayes et al. 2022; McConnachie et al. 2022). GHOST's wide spectral coverage and its high efficiency in the blue region are crucial to detect species that are tracers of the early chemical evolution (e.g., Al, Mn, Sr, Eu) and that were not accessible with the GRACES spectrograph.

The instrument setup and data reduction are described in Section 2. Sections 3 and 4 discuss the model atmospheres analysis and the chemical abundance analysis, respectively. Orbital parameters are reported in Section 5. The results are discussed in Section 6, focusing on the type of supernovae that polluted the formation site of P180956, and on its origin in an ancient dwarf system, which may have resembled today's ultra-faint dwarfs. Conclusions are presented in Section 7.

GHOST observations

The target (G = 13.50 mag), P180956, was initially observed as part of the PIGS photometric survey using MegaCam at the Canada-France-Hawaii Telescope (CFHT). Its small pericentric distance (rperi ∼ 0.7 kpc), its apocentre (∼13 kpc), its limited maximum excursion from the plane (Zmax ∼ 1.8 kpc), and its high eccentricity (ϵ ∼ 0.90) imply that it was likely accreted during the early stages of Galactic assembly (Sestito et al. 2023a). A low/medium-resolution (R ∼ 1,300 and R ∼ 11,000) spectrum of P180956 was obtained using AAT/AAOmega (Arentsen et al. 2020a), and also with the Gemini Remote Access to CFHT ESPaDOnS Spectrograph (GRACES, R ∼ 40,000, Chene et al. 2014; Pazder et al. 2014) as part of LLP-102 (PI K.A. Venn). The GRACES spectrum was analysed by Sestito et al. (2023a) along with 16 other VMP stars in the Galactic bulge; however, due to limitations in the throughput of the 300-metre optical fiber, the bluest spectral regions were not accessible for that study. The GRACES spectral analysis was limited to the 4900−10000 Å range only. Thus, P180956 was chosen as a commissioning target for the Gemini High-resolution Optical SpecTrograph (GHOST, Pazder et al. 2020).
The instrument's high efficiency in the blue spectral region makes it ideal for detecting spectral features of additional chemical elements, allowing a deeper investigation of the origins of P180956.

The target was observed on September 12th, 2022, during the second commissioning run of GHOST. Three exposures, each lasting 600 seconds, were conducted. The instrument was configured in the standard-resolution (R ∼ 50,000) single-object mode, employing a spectral and spatial binning of 2 and 4, respectively. This setup covers the 3630−5440 Å region with the blue arm and the 5210−9500 Å region with the red arm. This specific configuration and these exposure times were chosen to enable the detection of spectral lines from species in the bluer regions (∼4000 Å) of the spectrum that were not accessible with GRACES, i.e., C, Al, Si, Sc, V, Mn, Co, Cu, Zn, Sr, Y, La, and Eu. Table 1 reports the Gaia DR3 source ID and photometry, the reddening from Green et al. (2019), the total exposure time, and the number of exposures.

Data reduction

The acquired spectra were processed using the GHOST Data Reduction pipeline (GHOSTDR, Ireland et al. 2018; Hayes et al. 2022), which is integrated into the DRAGONS suite (Labrie et al. 2019). This pipeline generates 1D spectra for the blue and red arms, which were wavelength calibrated, order-combined, and sky-subtracted. The radial velocity measured from the individual exposures is reported in Table 1; the lack of RV variation likely rules out the possibility that the object is in a binary system. The continuum points in the observations were identified using spectral templates and fitted via an iterative sigma-clipping method to obtain a normalised spectrum. Finally, the blue and red output spectra were merged together with inverse-variance weighting in the overlapping regions.

Table 1. Log of the observations. The Pristine name, the short name, the source ID, G, and BP−RP from Gaia DR3, the reddening from the 3D map of Green et al. (2019), the total exposure time, the number of exposures, the SNR, and the radial velocity are reported. The SNR is measured as the ratio between the median flux and its standard deviation in three spectral regions: close to the Eu ii 4129 Å line, in the Mg ib Triplet, and in the Na i Doublet.

Figure 1 showcases the reduced spectrum of P180956. In the top panel, a comparison is made with the GRACES observation in the 4400−5200 Å region. Both instruments have similarly high spectral resolution; however, the SNR of the GRACES spectrum (black line) deteriorates below 5000 Å, a region where the number of spectral lines clearly increases, as seen in the GHOST spectrum (blue). The Balmer line H−ϵ, the Ca ii H&K lines, two Al i lines, two Ti i lines, five Fe i lines, and one Co i line are shown in the central panel. The Mg ib Triplet (∼5180 Å) region also contains several Fe and Ti lines, as shown in the bottom panel. Table 1 reports the SNR measured close to the Eu ii 4129 Å line, in the Mg ib Triplet region, and at the Na i Doublet (∼5890 Å).
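The spectrum-level steps described above are straightforward to reproduce. The following is a minimal sketch (assuming a 1D wavelength/flux array as produced by the pipeline; the window limits, polynomial order, and clipping threshold are illustrative choices, not the values used by GHOSTDR) of the SNR estimate and of an iterative sigma-clipped continuum normalisation:

import numpy as np

def window_snr(wave, flux, lo, hi):
    # SNR proxy used in Table 1: median flux over its standard deviation
    # inside a line-poor window [lo, hi] (in Angstroms).
    m = (wave >= lo) & (wave <= hi)
    return np.median(flux[m]) / np.std(flux[m])

def normalise(wave, flux, order=3, nsig=3.0, iters=5):
    # Iterative sigma-clipped polynomial continuum fit: absorption lines sit
    # below the continuum, so low outliers are rejected at each iteration.
    keep = np.ones(flux.size, dtype=bool)
    for _ in range(iters):
        coeffs = np.polyfit(wave[keep], flux[keep], order)
        cont = np.polyval(coeffs, wave)
        resid = flux - cont
        keep = resid > -nsig * np.std(resid[keep])
    return flux / cont

A template-based identification of continuum points, as used by the authors, would replace the simple low-outlier rejection here, but the clipping loop is the same idea.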
Stellar parameters

The stellar parameters used in this study are adopted from the GRACES analysis by Sestito et al. (2023a): Teff = 5391 ± 133 K and log g = 1.87 ± 0.10. To briefly summarize, the effective temperature is estimated from the colour-temperature relationship derived by Mucciarelli, Bellazzini & Massari (2021). This relationship is based on the Infrared Flux Method introduced by González Hernández & Bonifacio (2009) and adapted to the Gaia EDR3 photometry. The surface gravity is determined by applying the Stefan-Boltzmann equation, assuming a flat mass distribution between 0.5 and 0.8 M⊙. These calculations rely on several inputs: 1) the de-reddened photometry; 2) the distance to the star, estimated to be 3.30 ± 0.27 kpc (Sestito et al. 2023a); and 3) a metallicity (taken as [Fe/H] = −2.0 ± 0.1, Sestito et al. 2023a). Uncertainties on the stellar parameters are derived using a Monte Carlo simulation.

The microturbulence velocity (vmicro) is obtained spectroscopically by requiring a flat distribution of the abundances from the Fe i lines, A(Fe i), versus the reduced equivalent width. This gives vmicro = 1.5 ± 0.1 km s−1.

Spectral lines and atomic data

The spectral line list is generated with linemake (Placco et al. 2021), including lines with hyperfine structure corrections (Sc, V, Mn, Co, and Cu), molecular bands (CH in the 4300 Å region), and r-process isotopic corrections (Ba, Eu). For CH, a ratio of 12C/13C = 5 has been assumed, as for a typical RGB star (Spite et al. 2006). Solar abundances are taken from Asplund et al. (2009).

An initial measurement of the equivalent widths is performed using DAOSPEC (Stetson & Pancino 2008), which automatically fits Gaussian profiles to spectra following the input line list. Given the signal-to-noise ratio of our spectrum, lines weaker than 15 mÅ are rejected, and lines stronger than 100 mÅ are further examined with non-Gaussian measurements, i.e., with a direct integral. The equivalent widths are then used with the MOOG spectrum synthesis code (Sneden 1973; Sobeck et al. 2011) to determine the chemical abundances assuming Local Thermodynamic Equilibrium (LTE). The spherical MARCS model atmospheres (Gustafsson et al. 2008; Plez 2012), which assume [α/Fe] = 0.4, are used for the chemical abundance analysis in this paper, which yields [Fe/H] = −1.95 ± 0.02. The chemical abundances of Sc, Cu, Y, Ba, La, and Eu are determined using the synth mode within MOOG. The list of spectral lines used for the chemical abundance analysis, their atomic data, their EWs, and their abundances are reported as supplementary online material.

Uncertainties on the chemical abundances

MOOG provides estimates of the chemical abundances A(X) along with their line-to-line scatter, δA(X). The total abundance scatters, δA(X),TOT, are calculated by combining in quadrature the line-to-line scatter with the uncertainties resulting from variations in the stellar parameters (δTeff, δlogg, both ∼0.02), in the microturbulence velocity (δvmicro ∼ 0.02), and in the metallicity (δ[Fe/H] ∼ 0.02). The final uncertainty for element X is given by σA(X) = δA(X),TOT/√NX, where NX is the number of lines. In case only one spectral line is available or the synth mode is employed, the dispersion in the Fe i lines is adopted as the typical dispersion.
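To make the Gaussian equivalent-width measurement described above concrete, the following is a minimal sketch (a stand-in for DAOSPEC, not the actual tool; the fitting window and initial guesses are illustrative):

import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(wl, depth, centre, sigma):
    # Normalised continuum (1.0) minus a Gaussian absorption profile.
    return 1.0 - depth * np.exp(-0.5 * ((wl - centre) / sigma) ** 2)

def equivalent_width_mA(wave, norm_flux, line_centre, window=1.5):
    # Fit the line within +/- window Angstroms and integrate the Gaussian:
    # EW = depth * sigma * sqrt(2*pi), converted to milli-Angstroms.
    m = np.abs(wave - line_centre) < window
    p0 = [1.0 - norm_flux[m].min(), line_centre, 0.1]
    popt, _ = curve_fit(gaussian_line, wave[m], norm_flux[m], p0=p0)
    depth, _, sigma = popt
    return 1000.0 * depth * abs(sigma) * np.sqrt(2.0 * np.pi)

Lines measured below 15 mÅ or above 100 mÅ would then be rejected or re-measured by direct integration, following the criteria quoted above.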
CHEMICAL ABUNDANCE ANALYSIS

The blue wavelength coverage of GHOST permits an analysis of the spectral lines of many elements in metal-poor stars, including a range of α-, odd-Z, Fe-peak, and neutron-capture process elements.

Carbon

Carbon was first inferred from the low-/medium-resolution campaign of PIGS (Arentsen et al. 2020b), showing that the star is carbon-normal, [C/Fe] = 0.17 ± 0.24. Figure 2 shows the synthesis of the CH bands (Masseron et al. 2014) in the 4300 Å region (top panel) and the residuals (bottom panel). The synthesis is made with the synth mode of MOOG, yielding [C/Fe] = 0.2 ± 0.1, in agreement with the previous measurement. The evolutionary correction for [C/Fe], inferred using the relation from Placco et al. (2014), is 0.13 dex, providing a final [C/Fe] = 0.33 ± 0.10.

α-elements

The α-elements producing detectable lines in this star are Mg, Si, Ca, and Ti (Lawler et al. 2013; Wood et al. 2013; Den Hartog et al. 2021, 2023; Kramida et al. 2021). A(Mg i) is derived from two lines of the Mg i Triplet (λλ5172.684, 5183.604 Å) and from 5 other lines for which the SNR is high (> 15). Si i is detected from the 4102.936 Å line, which is blended with the wing of the broader Hδ line. A(Ca i) is inferred from 17 spectral lines, from 4200 Å to 6500 Å. The Ca Triplet has been excluded since its lines are strong (> 140 mÅ). Ti i and Ti ii are present with 17 and 30 lines, respectively.

Odd-Z elements

Four odd-Z elements are detectable in the GHOST spectrum of this object: Na, Al, K, and Sc (Lawler et al. 2019; Roederer & Lawler 2021; Kramida et al. 2021). Na i is present with the Na i Doublet (λλ5889.951, 5895.924 Å). The Na i D lines from the interstellar medium are not affecting the stellar measurements.

Neutron-capture process elements

The blue coverage of the GHOST instrument is essential to detect the neutron-capture process elements in the spectrum of this very metal-poor star, namely Sr, Y, Ba, La, and Eu (Hannaford et al. 1982). The closest available NLTE grid parameters, log g = 2.3 (vs. 1.87 ± 0.10) and vmicro = 1.0 km s−1 (vs. 1.5 ± 0.1 km s−1), provide a negligible correction of ∼ −0.01 dex. Similarly, Ba ii NLTE corrections specific to this star are not available from Mashonkina & Belyaev (2019); the closest match in their online database gives a minor correction of ≲0.05. NLTE corrections for K are obtained from Ivanova & Shimanskii (2000), which also include hyperfine structure corrections. We highlight that the NLTE corrections on Fe, Ti, and Cr help to obtain the ionisation balance among these species, i.e., A(X i) ≈ A(X ii).

Table 2 reports the chemical abundance ratios in LTE, [X/H]LTE and [X/Fe]LTE, their uncertainties, the number of lines used, and the average NLTE corrections, ∆NLTE.

Comparison with the GRACES spectral analysis

One of the reasons that P180956 was selected as a GHOST commissioning target was its unusually low [Na/Mg], [Ca/Mg], and [Ba/Fe] ratios, either in LTE or NLTE (Sestito et al. 2023a). Thus, we examine our abundance results from this improved GHOST spectrum in comparison to those from the more limited GRACES spectrum.

In general, the chemical abundance results from the GHOST spectra are similar to those from the previous GRACES analysis, as seen in Figure 4, both in LTE. The agreement is excellent for the majority of the elements (≲2σ), as expected given that we have adopted the same stellar parameters. The better quality of the GHOST spectrum at all wavelengths and the larger number of lines produce a smaller line-to-line dispersion and, therefore, smaller uncertainties on [X/H]. Our Fe i abundance from GHOST is determined from 188 lines, whereas the GRACES analysis included only 63 lines, which improves the line-to-line scatter by a factor of ∼1.5. Larger differences (≳3σ) are found for Mg i, Ti ii, and Ni i.
Looking at the previous analysis (Sestito et al. 2023a), this star was analysed with the metallicity in the model atmosphere set lower than the final value (by ∼0.4 dex). On the other hand, EW measurements of lines in common to both analyses are similar, to within ∼5−10 percent. Hence, the former is likely the main culprit for the differences we find for Mg i, Ti ii, and Ni i.

Comparisons with the MW halo and bulge

The chemical abundances of P180956 are compared to a compilation of stars in the MW bulge and halo in Figure 5. The panels in the figure are arranged in order of increasing proton number of the species. The MW bulge compilation (small light blue circles) includes results from Howes et al. (2014, 2015, 2016), Koch et al. (2016), Reggiani et al. (2020), and Lucey et al. (2022). The MW halo compilation (small grey squares) consists of stars obtained from the Stellar Abundances for Galactic Archaeology database (SAGA, Suda et al. 2008), restricted to high-resolution analyses (R > 30000), with no lower or upper limits on the measurements, and with low uncertainties on the chemical abundances, σ[X/H] < 0.2. Both compilations are from LTE analyses.

The chemical ratios of [Na, Mg, Ca, Sr, Ba/Fe] and the upper limit for [Eu/Fe] in P180956 are situated at the lower end of the distribution observed in the MW halo and bulge. This is particularly evident for [Ba/Fe], which is nearly 2 dex lower than in the majority of stars at the same [Fe/H]. In contrast, the [Si, Ti, Sc, Co/Fe] ratios in P180956 are slightly enhanced compared to the literature compilations for the MW. The other upper limits (Cu, Y, La) do not provide significant constraints on the abundances.

GALACTIC ORBIT

Orbital parameters are derived using the Galpy code (Bovy 2015), integrating the orbit for 1 Gyr into the future and into the past. Uncertainties are determined through a Monte Carlo simulation (1000 iterations) on the input quantities, drawn from a Gaussian distribution. Two Galactic gravitational potentials are adopted, and the corresponding orbits are displayed in Figure 6. One potential corresponds to the model used in Sestito et al. (2023a), which includes a rotating bar (black line); the second (blue line) is without the bar, as in Sestito et al. (2019). These two gravitational potentials lead to very similar orbits.

In both cases, P180956 exhibits a slightly prograde orbit (vertical angular momentum Lz ∼ 300 kpc km s−1), which is confined to the Milky Way plane (Zmax ∼ 2 kpc). The pericentric (Rperi ∼ 0.7 kpc) and apocentric (Rapo ∼ 13 kpc) distances indicate that the star's trajectory takes it very close to the Galactic centre before venturing well beyond the Sun's position, resulting in an orbit characterised by high eccentricity (ϵ ∼ 0.9). These results agree within 1σ with the previous orbital analysis of Sestito et al. (2023a). Table 3 reports the pericentric and apocentric distances, the eccentricity, the maximum excursion from the plane, and the vertical component of the angular momentum, as calculated using both gravitational potentials.
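The orbit integration described above can be sketched with galpy as follows. This is a minimal illustration using the bar-free MWPotential2014 rather than the authors' exact potentials; the coordinates are taken from the star's Pristine name (RA 18:09:56.78, Dec −29:47:59.8) and the 3.30 kpc distance quoted earlier, while the proper motions and radial velocity are placeholders to be replaced with the measured Gaia/GHOST values:

import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# [RA (deg), Dec (deg), distance (kpc), pmRA (mas/yr), pmDec (mas/yr), vlos (km/s)]
obs = [272.487, -29.800, 3.30, 0.0, 0.0, 0.0]   # pm and vlos: placeholders
orbit = Orbit(obs, radec=True)

ts = np.linspace(0.0, 1.0, 10000) * u.Gyr       # 1 Gyr forward; use -ts for the past
orbit.integrate(ts, MWPotential2014)

print("r_peri:", orbit.rperi(), " r_apo:", orbit.rap())
print("ecc:", orbit.e(), " Z_max:", orbit.zmax(), " Lz:", orbit.Lz())

A Monte Carlo over the input quantities, as done by the authors, would wrap this in a loop with the observables drawn from Gaussians.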
DISCUSSION

P180956 has peculiar kinematics, indicating that the star is confined to the Milky Way plane on a very eccentric orbit, reaching both the very inner region of the MW and a position well beyond the Sun (see Figure 6). Furthermore, P180956 shows very low [Ba/Fe], [Ca/Mg], and [Na/Mg]. These signatures stand out when compared to MW halo stars, and resemble the abundances of stars found at present in ultra-faint dwarf galaxies. Low chemical abundance ratios in the elements listed above have been interpreted as a sign of contributions from a small number of low-mass supernovae of type II (SNe II) in the past, e.g., the so-called "one-shot" model (Frebel, Kirby & Simon 2010). Therefore, the interpretation of the chemo-dynamical properties from the GRACES spectrum implied that this star may have formed in an ancient dwarf galaxy accreted very early in the MW's formation. In the following subsections, a revised and more thorough discussion of the origin of this star and of the properties of its formation site is presented.

The yields of the supernovae progenitors

The wavelength coverage of GHOST and the SNR of the observed spectra allow the detection of up to 18 chemical species and four meaningful upper limits, for a total of 13 elements beyond those available in the previous GRACES spectral analysis. Comparing this extensive set of chemical abundances with the predicted yields of supernovae from theoretical models can give insights into the nucleosynthetic processes that occurred at the formation site of P180956. To accomplish this, the online tool StarFit is used. The best-fit yields are obtained by combining the type II supernova yields of the best solutions chosen from a pool of theoretical models. A total of ten models have been selected, encompassing various types of supernova events, i.e., hypernovae, core-collapse supernovae, rotating massive stars, neutron star mergers, and pair-instability SNe.

The theoretical yields [X/H] from the contributing SNe are compared with the observational data in the top panel of Figure 7. The scaled solar abundance ratios (black line) are from Asplund et al. (2009), shifted to match the [Fe/H] of P180956. These fail to reproduce our data, except for Fe (by design), Na, and Mg. The solar abundance pattern predicts a net decrease in the yields from Si onwards, which is not seen in P180956.

The best-fit solution (magenta line) from StarFit consists of five SNe II events originating from Population III stars. This fit is a mixture of low-mass hypernovae with M ∼ 10−15 M⊙ (Heger & Woosley 2010; Heger et al. 2012) and one fast-rotating intermediate-mass supernova (∼300 km s−1, ∼80−120 M⊙). The differences between the observed [X/H] and the theoretical yields from the best (magenta circles) and second-best (orange squares) fits are displayed in the bottom panel of Figure 7. Both solutions provide a difference below ≲0.3 in absolute value for the majority of the species.
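The fitting idea behind this comparison can be illustrated with a toy non-negative least-squares mixture. This is a conceptual sketch only: StarFit's actual algorithm, model grid, and dilution treatment are more sophisticated, and the abundance and yield vectors below are hypothetical placeholders, not real tables:

import numpy as np
from scipy.optimize import nnls

elements = ["Na", "Mg", "Si", "Ca", "Ti", "Fe"]
obs_xh = np.array([-2.6, -2.1, -1.5, -2.0, -1.4, -1.95])   # illustrative [X/H]

# Hypothetical per-event yield patterns, expressed as the [X/H] each event
# would imprint on the gas if acting alone (placeholders only).
yield_hypernova = np.array([-2.8, -2.2, -1.6, -2.1, -1.5, -2.0])
yield_fast_rot  = np.array([-2.4, -2.0, -1.9, -2.3, -1.9, -2.3])

# Enrichment mixes linearly in number densities, so fit in linear space.
A = 10.0 ** np.column_stack([yield_hypernova, yield_fast_rot])
b = 10.0 ** obs_xh
weights, resid = nnls(A, b)        # non-negative least squares: weights >= 0

model_xh = np.log10(A @ weights)
for el, o, m in zip(elements, obs_xh, model_xh):
    print(f"{el:>2}: obs {o:+.2f}  model {m:+.2f}  diff {o - m:+.2f}")

Extending the element list and the pool of yield columns, and scanning over progenitor masses, energies, and rotation rates, turns this toy into the kind of search StarFit performs over its model database.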
We want to emphasise that StarFit is used here more as an illustration of the range of events that might be needed to explain the chemistry of this star. StarFit suggests that contributions from high-mass supernovae (> 140 M⊙, e.g., pair-instability SNe) and neutron star mergers are ruled out for this star. The former would produce a strong odd-even effect in the yields (Takahashi, Yoshida & Umeda 2018; Salvadori et al. 2019), i.e., very low [Na, Al/Mg] (∼ −1.3) and high [Ca/Mg] (≳ +0.6); the latter would produce an enrichment in neutron-capture elements (e.g., Cowan et al. 2021). Both scenarios are in contrast with the observed chemical properties of P180956. StarFit suggests that the main contributing SNe II are in the low-mass range. Specifically, hypernovae are necessary to produce the high [Si, Ti/Mg] ratios, while they provide only a small contribution to the heavy elements. These findings align with the scenario proposed by Ishigaki et al. (2018), in which low-mass hypernovae (≲40 M⊙) are the primary sources of enrichment of the interstellar medium during the early stages of chemical evolution. In addition, the presence of one fast-rotating intermediate-mass supernova is needed to reproduce the pattern of the heavy elements, i.e., Sr and Ba.

In-situ vs. accreted diagnostics

What if this star formed in-situ, i.e., in the proto-disc but after the early Galactic assembly? The period between the early MW assembly and the formation of the disc, dubbed "Aurora" (Belokurov & Kravtsov 2022), has been proposed to be very chaotic, forming bound massive clusters chemically similar to globular clusters (Belokurov & Kravtsov 2023). This implies that some Aurora stars would be enriched in N, Na, and Al, while others would resemble "normal" halo stars (Belokurov & Kravtsov 2023). P180956 is Na-poor, Al-normal, and neutron-capture poor, with the latter being very rare for halo stars at that metallicity (see Figure 5). This chemical pattern rules out "Aurora" as the origin of P180956.

Other studies have suggested [Mg/Mn] vs. [Al/Fe] as a diagnostic to differentiate accreted stars from in-situ stars (e.g., Das, Hawkins & Jofré 2020; Horta et al. 2021). Figure 8 illustrates this chemical space, including APOGEE DR17 (Abdurro'uf et al. 2022) stars for comparison. Dashed black lines delineate three regions where accreted, in-situ low-α, and in-situ high-α stars are more likely to be found (e.g., Das, Hawkins & Jofré 2020; Horta et al. 2021). In the LTE case, P180956 lies close to the centre of the accreted "blob", while with NLTE corrections the star falls into the in-situ low-α region.

Recently, Horta et al. (2023) provided a summary of the chemo-dynamical properties of known accreted structures based on the latest Gaia and APOGEE data releases. Their Figure 13 displays the [Mg/Mn] vs. [Al/Fe] space for all the known accreted structures, showing that some of their stars have [Mg/Mn] ∼ 0.0 or even negative values (e.g., Sagittarius, Sequoia, Gaia-Sausage-Enceladus). Therefore, part of the in-situ low-α region is actually accreted. The accreted region in Figure 8 is here tentatively extended (accreted low-α) by lengthening the dashed green line, which suggests that P180956 is of accreted origin. Offsets between the NLTE infrared (APOGEE) and the NLTE-corrected optical analyses are estimated to be ∆[Mg/Mn] = +0.15 ± 0.18 and ∆[Al/Fe] = −0.11 ± 0.12 (Jönsson et al. 2020). These corrections, not applied in Figure 8, would move P180956 more closely to the accreted low-α "blob".
Can P180956 be associated with any of the recently discovered accreted structures? The system that bears the closest resemblance to P180956 in terms of kinematical properties (see Horta et al. 2023) is Gaia-Sausage-Enceladus (GSE, e.g., Helmi et al. 2018; Belokurov et al. 2018). The highly eccentric orbit of GSE (0.93 ± 0.06) and its mean pericentric and apocentric distances (rperi = 0.61 ± 1.03 kpc and rapo = 17.15 ± 5.22 kpc) are compatible with P180956's orbital parameters within 1σ (see Table 3). However, stars of GSE reach a maximum height from the plane that is typically greater than that of our target (Zmax = 9.84 ± 6.14 kpc vs. Zmax = 2.01 ± 0.35 kpc). Helmi et al. (2018) demonstrate that the GSE population has lower [α/Fe] ratios compared to stars in the MW halo. Similarly, Hayes et al. (2018) provide evidence for a low-α population, showing that its stars exhibit lower [Mg/Fe] (see Figure 5) ratios than the majority of MW halo stars at the same metallicity.

The [Sr/Fe] of P180956 is similar to the bulk value of dwarf galaxies (DGs), while UFDs have lower values. The opposite behaviour is seen for [Ba/Fe], where P180956's ratio is similar to those at the low end of the UFD distribution. The low abundance of neutron-capture elements has been interpreted as the contribution of low-mass supernovae and the absence of neutron star merger events (e.g., Cowan et al. 2021). Furthermore, the combination of stochasticity in the production of neutron-capture elements with the inability of UFDs to retain metals can explain the low [Sr, Ba, Eu/Fe] observed in these systems (e.g., Venn et al. 2012; Ji et al. 2019).

The distribution of [Sr/Ba] vs. [Ba/Fe] is shown in the right panel of Figure 9. Halo stars exhibit a downward trend as [Ba/Fe] increases (Mashonkina et al. 2017b), albeit mostly clumped around [Sr/Ba] ∼ 0.3 (Ji et al. 2019). Both UFDs and DGs present a wide distribution in [Sr/Ba] (spanning up to ∼1.5 dex), with UFDs populating a region distinct from the majority of MW halo stars and of DGs (Mashonkina et al. 2017b; Roederer 2017; Ji et al. 2019; Reichert et al. 2020; Sitnova et al. 2021). P180956 has a relatively high [Sr/Ba] ratio, close to the upper end of the distributions of the MW halo, Coma Berenices (UFD), and Sculptor (DG), while its [Ba/Fe] is typical of a UFD star.

Did this star originate in a UFD or in a classical DG? At the metallicity of P180956, DGs would likely involve contributions from asymptotic giant branch stars (AGBs) and, in some cases, SNe Ia (see de los Reyes et al. 2020; Sestito et al. 2023b, for Sculptor and Ursa Minor). AGBs would produce Ba via s-process nucleosynthesis (e.g., Pignatari et al. 2008; Cescutti & Chiappini 2014), reaching solar values. Additional contributions from SNe Ia would lower the overall [α/Fe] ratios to solar or sub-solar values. Given that P180956 is very Ba-poor ([Ba/Fe]NLTE ∼ −1.7) and shows inhomogeneity in the α-elements ([Si, Ti/Mg, Ca] ∼ 0.5), the contributions of AGBs and SNe Ia are likely ruled out, and with them the DG scenario.

How to explain the high [Sr/Ba]? Mashonkina et al. (2017b) discuss that sub-solar [Sr/Ba] implies that both elements are produced solely by the r-process, while solar and supersolar [Sr/Ba] indicate the involvement of the s-process in the Sr production. Various kinds of supernova events have been proposed to explain the relative excess of Sr compared to other neutron-capture elements (Mashonkina et al. 2017b, and references therein).
Hypernovae (Izutani, Umeda & Tominaga 2009) and s-process nucleosynthesis in low-metallicity fast-rotating supernovae (Pignatari et al. 2008; Banerjee, Qian & Heger 2018; Grimmett et al. 2018; Limongi & Chieffi 2018) are also listed among these, and both are invoked by the best-fit models from StarFit (see Section 6.1). We note from StarFit that the fast-rotating supernova is the event that contributes most to the [Sr/Ba] enrichment.

P180956, witness of the early Galactic assembly

Above we have outlined the most likely origin for P180956 as a star from an accreted UFD. Two new questions now arise: whether the star was brought in during the early Galactic assembly or later on, and whether its system was isolated or brought in with one of the massive known accreted satellites. P180956 was also selected for GHOST commissioning because of its peculiar orbital parameters, as it remains relatively close to the MW plane (see Figure 6). Recently, the presence of this population has been observed from the ultra metal-poor regime (UMP, [Fe/H] ≤ −4.0, Sestito et al. 2019) up to the disc's metallicity (Sestito et al. 2020; Di Matteo et al. 2020; Venn et al. 2020; Cordoni et al. 2021; Mardini et al. 2022), finding that the majority of these stars move on prograde orbits. While VMP "planar" stars have been dynamically detected in various investigations, thorough and detailed analyses of their chemical properties are scarce.

The existence of this population has also been explored through high-resolution cosmological zoom-in simulations (Sestito et al. 2021; Santistevan et al. 2021). The simulations predict the presence of a "planar" population. While previous observational studies focused on the origin of low-eccentricity stars, Sestito et al. (2021) also discussed that the more eccentric members of the planar population were likely brought in during the early-assembly phase. This is because, during that epoch, the gravitational potential of the forming proto-Galaxy was still shallow, allowing accreted systems to deposit their stars into the inner regions. The relatively small excursion from the MW plane and the pericentric and apocentric distances, in addition to the high eccentricity, suggest that the star was brought in during the early-assembly phase.

In Section 6.2, we ruled out that P180956 formed in one of the known accreted structures. However, given 1) that some of its orbital parameters are similar to those of GSE (except for Zmax), 2) its likely formation in a UFD-like system (see Section 6.3), and 3) its early accretion onto the MW, two scenarios for the origin of P180956 are proposed. The first is that P180956 originated in one of the many low-mass building blocks that formed the proto-MW, which had chemical and physical properties similar to those of present-day UFDs; the second is that its progenitor system was a UFD satellite of GSE, which was brought into the inner Galaxy during the infall of GSE.

CONCLUSIONS

This work represents the second high-resolution spectroscopic analysis utilising the GHOST spectrograph mounted at Gemini South, following the detailed analysis of two r-process-rich stars in the Reticulum II ultra-faint dwarf galaxy (Hayes et al. 2023). The spectra of P180956, a star with unique chemo-dynamical properties, were obtained during the second commissioning run of the instrument in September 2022. Previously, the star had been observed with GRACES at Gemini North and analysed as part of the Pristine Inner Galaxy Survey (Arentsen et al. 2020b; Sestito et al. 2023a).
In this study, we conducted a comprehensive analysis of the chemo-dynamical properties of P180956, leading to the following results:

(i) The high efficiency and wide spectral coverage of the GHOST instrument (see Figure 1) enabled the detection of approximately 20 atomic species (see Figure 5), almost twice what was possible to measure with GRACES (Kielty et al. 2021; Jeong et al. 2023; Sestito et al. 2023a,b; Waller et al. 2023). These species provide crucial insights into the origin and chemical properties of P180956 and its formation site.

(ii) The abundance pattern of P180956 is best reproduced by the yields of a small number of low-mass hypernovae (∼10−15 M⊙) together with one fast-rotating intermediate-mass supernova (∼80−120 M⊙; see Figure 7).

(iii) These combinations of supernova events resulted in a composition of α-elements with solar Ca and Mg abundances and enhanced Si and Ti abundances (see Figures 5 and 7).

(iv) The specific combination of supernova yields led to low abundances of neutron-capture elements (Sr, Ba, Eu), with a relatively high [Sr/Ba] ratio (see Figure 9). This can be explained by the additional s-process channels for Sr production that operate mostly in fast-rotating supernovae.

(v) The kinematical properties of P180956 (see Figure 6) suggest that the star was likely accreted during the early-assembly phase of the Milky Way. Its [Mg/Mn] ratio is also indicative of an accreted origin (see Figure 8).

(vi) None of the known accreted structures exhibit chemo-dynamical properties that perfectly resemble those of P180956. Only Gaia-Sausage-Enceladus has eccentricity and apocentric and pericentric distances similar to those of P180956.

(vii) P180956 originated in an ancient system chemically similar to present-day ultra-faint dwarf galaxies, given its low amount of neutron-capture elements (see Figures 5 and 9), and was either accreted alone or dragged in with Gaia-Sausage-Enceladus as its satellite.

The advent of the GHOST high-resolution spectrograph has been invoked by various chemo-dynamical investigations targeting the MW and its satellite systems (e.g., Sestito et al. 2023a,b; Waller et al. 2023). This study, along with Hayes et al. (2023) and Dovgal et al. (2024), demonstrates that the combination of Gemini South's large aperture and GHOST's high efficiency and wide spectral coverage is ideal for investigating low-metallicity stars in the Milky Way and nearby systems. The synergy between GHOST and the Gaia satellite will undoubtedly propel Galactic archaeology studies forward.

This work is based on observations obtained at the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. On behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio de Ciencia, Tecnología e Innovación (Argentina), Ministério da Ciência, Tecnologia, Inovações e Comunicações (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea).

This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.

This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France (Wenger et al. 2000). This work made extensive use of TOPCAT (Taylor 2005).
Sr and Ba lines.Top panel: one of the Sr lines observed in the GHOST spectrum of P180956.Bottom panel: One of the Ba lines observed is marked with a blue line, while synthetic spectra are denoted by black ([Ba/Fe] = −2.0),red ([Ba/Fe] = −1.5, best fit), and olive ([Ba/Fe] = −1.0)lines. FeIFigure 4 . Figure 4. GHOST vs. GRACES chemical abundances, [X/H].GRACES abundances are from Sestito et al. (2023a).Both datasets are in LTE.Points are slightly offset in the horizontal direction to better show their errorbars.Uncertainties on GHOST data are often smaller than the size of the marker. Figure 5 . Figure5.Chemical abundances.Blue and red circles represent the chemical abundances of P180956 in LTE and NLTE-corrected, respectively.Blue triangles denote upper limits for P180956.Small light blue circles represent the bulge compilation(Howes et al. 2014(Howes et al. , 2015(Howes et al. , 2016;;Koch et al. 2016;Reggiani et al. 2020; Lucey et al. 2022), while small grey squares correspond to the halo compilation from the SAGA database(Suda et al. 2008).The uncertainties on [X/Fe] for P180956 are often smaller than the symbol size.The scarcity of literature stars in the [K/Fe] panel may be attributed to the difficulty in measuring K spectral lines in the optical range, as they can be blended with telluric water vapour lines. Figure 6 . Figure 6.Galactic orbit of P180956.Left panel: Height from the plane Z vs. in-plane projected distance R XY .Central panel: Y vs. X Galactic positions.Right panel: Z vs. X Galactic positions.Black and blue lines represent the orbit integrated in a gravitational potential with and without the presence of a rotating bar as in Sestito et al. (2023a) and as in Sestito et al. (2019), respectively.The red, black and green circles mark the position of P180956, of the Galactic centre (GC), and of the Sun at the present day. abundances and enhanced Si and Ti abundances (see Figures 5 and 7). Table 2 . Chemical abundances [X/H] in LTE, their final uncertainties σ [X/H] , already divided by the square root of the number of lines, the [X/Fe] ratios in LTE, the number of spectral lines used N lines , and the NLTE corrections ∆ NLTE = [X/H] NLTE − [X/H] LTE .Upper limits are expressed within 1σ. Table 3 . Pericentric and apocentric distances (R peri , Rapo), the eccentricity (ϵ), the maximum excursion from the plane (Zmax), and the vertical component of the angular momentum (Lz) are reported. Asplund et al. (2009) and blue circles are NLTE and LTE [X/H] of P180956, respectively.Upper limits of Cu, Y, and Eu are marked with empty downward blue triangles.Top panel: Black dots and line represent the solar scaled abundances fromAsplund et al. (2009)shifted to match the [Fe/H] of P180956.Magenta and orange dots and lines represent the theoretical yields from the best (5 SNe II events) and second best fit (3 SNe II events), respectively.Lower limits of Sc and Cu in the theoretical models are denoted with empty upward triangles.Bottom panel: The difference between the observed [X/H] and the yields predicted by the best (magenta circles) and second best (orange squares) fit from StarFit.Upper limits in the data and lower limits in the modelled yields are removed.Uncertainties on the data are summed in quadrature to 0.15 for the theoretical yields.The horizontal dashed line marks the null difference. provide evidence for a low-α population, showing that its stars exhibit [Mg, [Mg/Mn] vs. 
[Al/Fe] space.P180956 data are shown in blue and red in LTE and NLTE-corrected ([Mg/Mn] only), respectively.Small grey dots are APOGEE DR17 MW stars selected to have SNR > 70 and [Fe/H] < −0.7, which are derived from an NLTE analysis.The three regions delimited by the dashed black lines, accreted and in-situ low-/high-α are defined following Das, Hawkins & Jofré (2020) and Horta et al. (2021).The accreted region has been prolonged following the dashed green line, in which accreted low-α stars should lie.
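The orbital quantities reported in Table 3 (Rperi, Rapo, eccentricity, Zmax) can be reproduced from Gaia-like astrometry by integrating the orbit in an assumed Galactic potential. The sketch below uses galpy's MWPotential2014 as a stand-in for the bar/no-bar potentials actually adopted in the paper, and all numerical inputs are placeholders, not the measured astrometry of P180956.

```python
# Minimal sketch: orbital parameters from Gaia-like astrometry with galpy.
# MWPotential2014 is a stand-in for the paper's potentials; the astrometry
# below is a placeholder, not the values for P180956.
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# [RA (deg), Dec (deg), distance (kpc), pm_RA* (mas/yr), pm_Dec (mas/yr), v_los (km/s)]
o = Orbit([270.0, -30.0, 5.0, -2.0, -4.0, 150.0], radec=True)

ts = np.linspace(0.0, 5.0, 2001) * u.Gyr   # integration times
o.integrate(ts, MWPotential2014)

# Outputs are in kpc for galpy's default ro, vo scaling.
print(f"eccentricity = {o.e():.2f}")
print(f"R_peri = {o.rperi():.2f}, R_apo = {o.rap():.2f}, Z_max = {o.zmax():.2f}")
```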
A pragmatical physics-based model for predicting ladle lifetime

Synopsis

In this paper we develop a physics-based model for lining erosion in steel ladles. The model predicts the temperature evolution in the liquid slag, steel, refractory bricks, and outer steel shell of the ladle. The flow of slag and steel is due to forced convection induced by inert gas injection, vacuum treatment (extreme bubble expansion), natural convection, and waves caused by the gas stirring. The lining erosion takes place by dissolution of refractory elements into the steel or slag. The mass and heat transfer coefficients inside the ladle during gas stirring are modelled based on wall functions which take the distribution of wall shear velocities as a critical input. The wall shear velocities are obtained from computational fluid dynamics (CFD) simulations for a sample of scenarios spanning the operational space, and a model is developed using curve fitting. The model is capable of reproducing both the thermal and the erosion evolution well. Deviations between model predictions and industrial data are discussed. The model is fast and has been tested successfully in a 'semi-online' application. The model source code is available to the public at https://github.com/SINTEF/refractorywear.

Introduction

In the steel industry, ladles are frequently used to keep, process, or transport steel. Ladles are typically designed to hold metal masses ranging from 80 to 300 t (Figure 1). The melt typically consists of high-temperature liquid steel and some slag, which, when interacting with the inner wall of the ladle, will harm the wall integrity and cause significant wear. In order to reduce the wear, temperature-resistant and chemically resistant refractory bricks are applied to build an inner barrier, typically three layers of wear bricks (inner lining), which should last for a long time in contact with the liquid steel and at the same time protect the ladle from showing hot areas. In this paper we address the inner lining erosion of a ladle utilized in secondary metallurgy (SM) at a Sidenor plant. Sidenor is the largest manufacturer of special steel long products in Spain, and is an important supplier of calibrated products in Europe.

During SM many processes may be going on. The SM ladles have a porous plug installed at the bottom. Gas (Ar or N2) is injected through the plug to induce liquid steel stirring. The rising flow of the liquid steel promotes the transfer of inclusions from the steel to the slag, and homogenizes the temperature and chemical composition.
The main objective of SM is to obtain the correct chemical composition and appropriate temperature for the casting process. In addition, several important processes must be completed during SM, for example the removal of inclusions and gases. In order to attain these objectives, Sidenor has an SM mill consisting of two ladle furnaces (LFs) and a vacuum degasser (VD). Each of the LFs has three electrodes for heating the slag, steel, and ferro-additions. The ladle contains steel and slag for the whole production process, from EAF tapping to the completion of the casting process. The liquid steel in the ladle has a temperature of around 1700 K and is covered with slag. The slag, which helps to remove impurities from the steel and prevents contact between the steel and the atmosphere, has a lower density than steel. The slag consists basically of lime and oxides. Slag conditioning can be improved during SM by adding slag-formers such as lime and fluorspar. In order to handle the liquid steel and slag at such high temperature, the ladle is built with a strong outer steel shell lined inside with layers of insulating refractory ceramic materials. The most important properties of the refractory lining are:

➤ Ability to withstand high temperatures
➤ Favourable thermal properties
➤ High resistance to erosion by liquid steel and slag.

The inner layer of refractory bricks, which is in contact with the liquid steel, is eroded by the interaction with the hot metal and the slag. Each heat erodes the refractory bricks, and after several heats they are so eroded that it is not safe to continue using the ladle. The refractory is visually checked after each heat and, depending on its state, the ladle may be used for another heat, put aside for repair, or demolished. In the case of repair, the upper bricks of the ladle, which are more eroded due to chemical attack by the slag, are replaced and the ladle is put back into production. Later, based on continuing visual inspection, the ladle may be deemed ready for demolition. In this case, the entire inner lining is removed and the ladle is relined with new bricks.

One important goal for Sidenor is to reduce refractory costs by identifying new methods for extending refractory life. One of the key points is to increase the number of heats without compromising safety. Another important issue is to better understand the mechanism of refractory erosion, in order to improve working practices and so increase ladle lifetime.

Motivation

The main goal of this investigation is to develop a model whose results, in conjunction with operator experience, can indicate whether the ladle can be safely used for another heat. The model should incorporate both historical and current production data.

In addition, the model should provide information about the major factors that contribute to ladle refractory erosion and indicate practices that could be adopted to extend refractory life.

Previous work on ladle lining erosion

Several studies have been published dealing with the properties of refractory bricks (Mahato, Pratihar, and Behera, 2014; Wang, Glaser, and Sichen, 2015), advising on improvements to produce high-quality bricks. A more general review of MgO-C refractories was given by Kundu and Sarkar (2021). The corrosion-erosion mechanisms have been studied in a few papers (Kasimagwa, Brabie, and Jönsson, 2014; Jansson, 2008; Mattila, Vatanen, and Harkki, 2002; Huang et al., 2013; LMM Group, 2020; Zhu et al., 2018). In the opinion of these authors, the most thorough approach was that of Zhu et al.
(2018). Bai et al. (2022) investigated the impact of slag penetration into MgO-C bricks.

In order to predict refractory erosion, the temperature, fluid composition, and mass transfer mechanisms must be considered. The heat balance has been studied in some specialized works (Çamdali and Tunç, 2006; Glaser, Görnerup, and Sichen, 2011; Zimmer et al., 2008; Duan et al., 2018; Zhang et al., 2009). The effect of slag composition has been studied in multiple works (Bai et al., 2022; Jansson, 2008; Kasimagwa, Brabie, and Jönsson, 2014; Mattila, Vatanen, and Harkki, 2002; Sarkar, Nash, and Sohn, 2020; Sheshukov et al., 2016; Zhu et al., 2018). A critical step in developing prediction models is the local mass transfer between the lining and the slag/metal. This has to date been treated by semi-empirical models (Bai et al., 2022; Sarkar, Nash, and Sohn, 2020; Wang et al., 2022). Wang et al. (2022) applied 3D computational fluid dynamics (CFD) and their predictions seemed to agree with observations. However, they did not report the diffusivities used in their model, and the underlying erosion-corrosion models were empirical and tuned to the data. It was found that these tuning factors would depend on the operating conditions.

In industry, refractory wear is known to be a result of (i) thermal stresses, resulting in thermal spalling, (ii) dissolution of the refractory bricks into the slag/metal, and (iii) dissolution of the binder materials into the slag/metal. Moreover, mechanical stresses imposed on the refractory during cleaning operations will affect erosion and lifetime. Phenomena such as spalling due to hydration of the bricks are also involved (Wanhao Refractory, 2023).

The impact of thermal stresses will be most severe at the bottom of the ladle, where hot steel meets colder refractory. As the velocity of the metal at the moment of impact is high, this is where the maximum thermal stresses are expected. The colder the ladle wall is when it meets hot steel at high speed, the greater the risk of crack formation.

It must be noted that the time between heats has a significant effect on thermal spalling. The temperature distribution in the ladle refractory wall at the time of filling is an important parameter that can be predicted using the model to be presented. However, the addition of a heating burner at the ladle waiting station is not included for now. Instead, we simulate a reduced waiting time to mimic the effects of using a burner to maintain refractory temperature.

The pragmatism-based approach to a model for ladle lining erosion

In previous publications the authors defined a methodology, 'Pragmatism in industrial modelling' (Johansen and Ringdalen, 2018; Johansen et al., 2017; Zoric et al., 2015), which is especially suited for developing fast and sufficiently accurate industrial models. In a twin paper (Johansen et al., 2023) the authors outlined the methodology applied in this work and the learnings that may be exploited in future projects. Here the details of the physics-based model are explained.

The objective of the model is to be able to advise or support operators in assessing whether it is safe to use the ladle for another heat. In such an application, the erosion state of the refractory must be updated from heat to heat and a simulation for a subsequent virtual heat performed. The virtual heat should contain as much information as possible about the next heat. The result of such a simulation, together with visual or optical inspection of the lining, would then lay the foundation for the final assessment.
Model simplifications and assumptions

The pragmatic model must be fast, as we wish to simulate a transient ladle operation lasting about two hours in less than a minute. This is critical because we wish to simulate all ladle operations within a year in a few hours, in order for the results to be applied directly in production, to carry out tuning, or to perform a parameter sensitivity analysis.

Figure 2 gives some idea of the phenomena involved. The heating elements (electrodes) can be submerged in the slag or work from above. They produce electric arcs that heat the liquid steel. The flow of the slag and liquid steel is not only a function of the gas flow rate applied for blowing, but is also influenced by several factors such as the mass of steel and slag, the vacuum pressure, and the thermophysical properties of the fluids.

The ladles are 3D objects, but due to speed requirements some overall model simplifications were made:

i. The model is 2D (cylinder-symmetrical) with the porous bottom plug placed in the centre. As a consequence, we assume that the gas/steel/slag flows can be regarded as rotationally symmetric
ii. The stirring gas is inert (only provides mixing)
iii. In the sidewalls only the radial heat balance is included
iv. In the bottom only the vertical heat balance is included
v. The solubility of MgO in the slag and of C in the steel are assumed constant
vi. The metal and slag phases are stratified and are assumed to be internally perfectly mixed. The phases exchange mass and energy with each other and with the refractory
vii. Above the slag, energy is exchanged by radiation only
viii. Refractory erosion due to thermomechanical stresses is not considered.

Volumes and mass balances

As the model must handle situations with different amounts of steel and slag in the ladle, all these possible situations have to be taken into account. The total volume of the slag and metal is

V = Vmetal + Vslag [1]

Accordingly, the mass of liquids inside the ladle is

M = ρmetal Vmetal + ρslag Vslag [2]

In our first approach, we neglect the volumes of the protruding impact element at the bottom and the volumes modified by eroded bricks. In this case, the metal-slag interface is positioned at height

hint = Vmetal / (πR²) [3]

and the thickness of the slag layer is

dslag = Vslag / (πR²) [4]

The mass balance for the ladle must also be respected. That is, for the slag

dMslag/dt = Ṁslag,EAF − Ṁslag,tapped + Σk ṁslag,k [5]

Typically, a slag former of type k, with total mass mslag,k, can be assumed to be added during one numerical time step, between times tⁿ and tⁿ⁺¹, such that

ṁslag,k = mslag,k / (tⁿ⁺¹ − tⁿ) [6]

For the metal we have

dMsteel/dt = Ṁsteel,EAF − Ṁsteel,tapped + Σk ṁalloy,k [7]

Ṁslag,tapped and Ṁsteel,tapped are the transient mass flow rates of slag and steel tapped out of the ladle. Similarly, ṁalloy,k is the mass flow rate of added alloy of type k. As for the slag, an alloy of type k with total mass malloy,k can be assumed to be added during one numerical time step between tⁿ and tⁿ⁺¹, such that

ṁalloy,k = malloy,k / (tⁿ⁺¹ − tⁿ) [8]

Based on Equations [5]-[8], the phase densities, the purge gas fractions present in each phase, and corrections for the eroded ladle radius, we can compute the transient interface position for the metal and slag. This is critical input to the thermal and erosion models.

Thermal models

A quasi-2D thermal model for the complete refractory lining and outer steel shell is outlined in Appendix A.
Thermal modelling. Both the sidewall and the bottom are included. The model is referred to as quasi-2D because vertical heat transport between horizontal brick layers is assumed to be insignificant compared to heat exchange with metal, slag, and radiation, and is therefore ignored. The steel shell exchanges energy with the surroundings, while the wetted inner refractory layer exchanges energy with the liquid steel and slag. Non-wetted refractory elements exchange energy with the top slag-metal surface, and internally, both by radiation. Enthalpy-based conservation models for steel and slag are developed, as detailed in Appendix A. In general, appropriate boundary conditions are developed and outlined in the appendices.

Discrete equations for the slag and metal energy

The coupled discrete equations for slag and metal enthalpy (see Equations [64] and [66], Appendix A) can be solved analytically, provided the inner refractory wall temperatures are known. First, we need to establish the relationship between temperatures and enthalpies. This is elaborated in Appendix C (Temperature-enthalpy relationships). As seen from Appendix D (Discrete equations for the slag-metal heat balance), explicit expressions for the slag and metal enthalpies are given by Equation [92]. Temperatures are then computed by Equations [75] and [77].

Erosion model

The erosion is primarily a result of dissolution and mass transfer from the refractory into the metal and slag. The erosion mechanism considered is mass loss of refractory to the liquid by dissolution. In addition, there are mass losses due to thermal stresses. These may be addressed in a machine learning model, which could exploit the predicted difference between the refractory temperature and the incoming steel temperature.

Refractory loss in the steel-wetted region

During periods of considerable agitation of the metal and slag (bubble-driven convection, natural convection, electromagnetic stirring), the carbon binder of the MgO-C refractory may be dissolved into the steel. The mass flux of carbon into the steel is locally given by Equation [9].

Here aC is the volume fraction of the refractory that is occupied by carbon, DC is the diffusivity of carbon in steel, and xC is the mass fraction of carbon in the steel.

By introducing the concept of a mass transfer coefficient, we may write Equation [9] as

JC = ρ kC,BL (xC^eq(Twall) − xC) [10]

Here kC,BL is the mass transfer coefficient for the liquid-side boundary layer and xC^eq(Twall) is the solubility of C in steel of the actual composition, where Twall is the temperature at the inner ladle wall. The wall temperature is controlled by the steel temperature and the temperature in the refractory brick. As the thermal conductivities of the liquid steel and the wear refractory are of the same order of magnitude (see Table I), the wall temperature will depend on both the steel and the refractory temperature.

For forced convection we may use the mass transfer coefficient suggested by Scalo, Piomelli, and Boegman (2012) and Shaw and Hanratty (1977), stating that the mass transfer coefficient for Schmidt numbers Sc > 20 can be approximated by Equation [11].

Values for the shear velocities typically range from zero to 0.1 m/s. From Equations [10] and [11] we learn that erosion of the steel-wetted ladle wall will increase with increasing gas-stirring flow rate and temperature (increased C solubility and diffusivity, decreased viscosity).
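A minimal sketch of Equations [10]-[11] follows. The explicit form of Equation [11] did not survive extraction; here the commonly quoted Shaw and Hanratty (1977) high-Schmidt-number correlation k = 0.0889 u* Sc^-0.704 is assumed, and the property values are illustrative order-of-magnitude numbers, not the paper's calibrated inputs.

```python
# Sketch of the forced-convection dissolution flux of carbon into steel.
# Equation [11] is assumed to be the Shaw & Hanratty (1977) correlation.
def mass_transfer_coefficient(u_star, nu, diffusivity):
    """k [m/s] from wall shear velocity u_star [m/s] and Schmidt number."""
    Sc = nu / diffusivity              # Schmidt number of the liquid
    return 0.0889 * u_star * Sc ** -0.704

# Illustrative values: liquid steel kinematic viscosity and C diffusivity
k_C = mass_transfer_coefficient(u_star=0.05, nu=8.5e-7, diffusivity=1.0e-8)

# Dissolution flux into the steel, Equation [10]: J = rho * k * (x_eq - x_bulk)
rho_steel = 7000.0                                 # kg/m^3
J_C = rho_steel * k_C * (0.10 - 0.01)              # x_C_eq = 0.1 as in the paper
print(f"k_C = {k_C:.2e} m/s, J_C = {J_C:.2e} kg/(m^2 s)")
```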
Mass transfer resistance at the interface between MgO-C and steel

At the inner surface of the MgO-C bricks, the C in the C-continuous domains (graphite and carbon contributions from the binder) (Zhang, Marriott, and Lee, 2001) will dissolve into the steel, while MgO may be considered inert. A schematic is provided in Figure 3. As the carbon (from carbon-dominated areas) is dissolved into the steel, the average transport length spore will stabilize at around a typical MgO particle radius. If the MgO particles are small, the convection inside the pore space can be neglected. In this case the transport in the pore space may be represented by pure diffusion, and we can write Equation [12].

Here xC^IB is the mass fraction of C at the wall, defined at the outer surface that the MgO particles make up as they protrude out of the C matrix. In this case the mass flows through the inner and outer layers must match (Equation [13]), where the mass transfer coefficient is given by Equation [14]. The effective mass transport of C from the MgO-C brick to the steel is then given by Equation [15].

Refractory loss in the slag-wetted region

The slag is collected in a relatively thin layer at the surface of the molten bath. Due to the bubble plume caused by the stirring gas, the slag will be pushed towards the refractory wall. As the bubble plume is asymmetrical, the slag thickness close to the refractory wall will vary along the ladle perimeter. We neglect these complexities and assume complete radial symmetry. The thickness dslag of the slag layer that contacts the refractory can be estimated by Equation [16].

The slag layer will move vertically, driven by waves generated by the bubble plume, as illustrated in Figure 4. The slag layer has thickness dslag and wave amplitude awave.

The mass transfer from the wall to the slag layer can be analysed by assuming a developing boundary layer. According to Schlichting (1979), the mass transfer along a developing boundary layer can be expressed as in Equation [17], where k is the mass transfer coefficient and x is the distance along the developing boundary layer.
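Before the remaining symbols are defined, the wave-swept mass transfer can be sketched numerically. The explicit forms of Equations [17]-[20] did not survive extraction; as a stand-in, the sketch below uses the classical laminar flat-plate result Sh_x = 0.332 Re_x^(1/2) Sc^(1/3), averaged over the swept length 2awave + dslag (cf. Equation [21]). All property values are illustrative placeholders.

```python
# Stand-in for the wave-swept slag/wall mass transfer (Equations [17]-[21]):
# the laminar flat-plate correlation, averaged over the swept length.
def k_slag_avg(u_wave, L_swept, nu_slag, D_MgO):
    """Average mass transfer coefficient [m/s] over swept length L_swept [m]."""
    # Averaging k(x) ~ x^(-1/2) over 0..L gives the familiar factor 0.664.
    return 0.664 * (u_wave / L_swept) ** 0.5 * nu_slag ** (-1.0 / 6.0) * D_MgO ** (2.0 / 3.0)

d_slag, a_wave = 0.05, 0.03          # slag thickness and wave amplitude [m]
L = 2.0 * a_wave + d_slag            # wetted height swept by the wave
k_MgO = k_slag_avg(u_wave=0.2, L_swept=L, nu_slag=3e-5, D_MgO=1e-10)
print(f"average k_MgO = {k_MgO:.2e} m/s over {L:.2f} m")
```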
DMgO is the diffusivity of MgO in the slag, and it is related to the Schmidt number by Equation [18].

The explicit mass transfer coefficient is then given by Equation [19]. By averaging the mass transfer coefficient k in Equation [19] over the thickness of the slag layer we obtain Equation [20].

The wave velocity uwave is estimated by Equation [108] (Appendix E), and the swept distance (amplitude) awave can be represented by lw in Equation [107]. It is possible to represent the distribution of mass transfer by a probability distribution. However, as a first approximation we assume that the wave-induced mass transfer applies to a region that extends over the thickness of the slag layer plus a region extending awave both above and below the slag layer. In this case we may estimate the mass transport to the slag to act over the height 2awave + dslag, where the average mass transfer coefficient for this layer is given by Equation [21].

Overall refractory loss model

We track both the MgO and C components of the refractory. Note that bottom erosion is not included in the model for now; the bottom is included only because of its impact on the thermal balance (heat storage). It is assumed that when C is dissolved from the bricks in the steel region, a corresponding amount of MgO is released and will end up in the slag. It is also assumed that the density of the bricks is related to the C and MgO volume fractions (aC, aMgO) and phase densities (ρC, ρMgO) by

ρbrick = aC ρC + aMgO ρMgO [23]

where aC + aMgO = 1. The MgO mass loss, MMgO, from a brick element during time dt, eroding a slice of thickness l, is given by Equation [24]. Here A is the total area, and A aMgO is the partial area where MgO is in contact with slag.

The corresponding loss of C due to the loss of MgO is then given by Equation [25]. From Equations [24] and [25] we find that in the slag region the carbon flux out of the refractory wall is given by Equation [26].

Carbon balance

Carbon is lost from the refractory by two mechanisms, depending on whether we are considering the steel-wetted or the slag-wetted zone (Equation [29]).

The summation is over all the vertical refractory bricks. Here ai,steel is the local steel fraction (which varies with height in the ladle) and aC is the carbon fraction in the refractory brick. Ai = 2πRΔxi is the local wall area.

MgO balance

MgO is lost from the refractory by the two analogous mechanisms (Equation [30]).

Here aC is the volume fraction of carbon in the brick, while (1−aC) is the MgO fraction. a*i,slag is the wave-enhanced slag fraction in contact with the lining. As a first approach we use a*i,slag = 0.25 ai−1,slag + 0.5 ai,slag + 0.25 ai+1,slag. The left-hand terms are split and the effect of the total mass change entered into the models. In the case of the slag we have Equation [31], where the mass balance was given by Equation [5]. According to these equations we may write Equation [30] as Equation [32], where it is assumed that there is no MgO in the slag tapped from the EAF.

Similarly, the mass balance for carbon becomes Equation [33].

The solubility of MgO in the slag is given (see Acknowledgements) by Equation [34], where T is the temperature in degrees Celsius. As the slag composition is not known, we use a temperature dependency which is approximate for 50 wt% CaO, 10 wt% SiO2, 2.5 wt% FeO, and the balance Al2O3.
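The bookkeeping implied by Equations [23]-[28] can be illustrated as follows: a dissolution flux of one component removes a slice of the composite brick and releases the other component in proportion to its volume fraction. This is only a sketch under assumed densities and fractions, not the paper's calibrated values.

```python
# Sketch of the per-brick erosion bookkeeping (Equations [23]-[28]).
# Volume fractions and densities are illustrative assumptions.
alpha_C, rho_C, rho_MgO = 0.12, 2200.0, 3580.0
rho_brick = alpha_C * rho_C + (1.0 - alpha_C) * rho_MgO   # Equation [23]

def erode_step(J_C, area, dt):
    """Thickness loss [m] of a steel-wetted brick slice from a carbon flux J_C [kg/(m^2 s)]."""
    m_C = J_C * area * alpha_C * dt          # carbon dissolves only where C contacts steel
    dl = m_C / (alpha_C * rho_C * area)      # slice thickness that lost its binder
    m_MgO = dl * area * (1.0 - alpha_C) * rho_MgO   # MgO released into the slag
    return dl, m_C, m_MgO

dl, mC, mMgO = erode_step(J_C=2e-4, area=0.06, dt=60.0)
print(f"eroded {dl * 1e6:.1f} um in 60 s; released {mMgO * 1e3:.2f} g MgO")
```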
Developing sub-models: a multi-scale approach

In the present approach we used CFD simulations (Johansen and Boysan, 1988) to obtain the shear stresses along the wall of the ladle. Based on a set of CFD simulations, a fitted curve of the vertical shear stress distribution was provided as input to both the thermal and erosion models (see Appendix B, Wall shear stress model). We did not include the effects of the slag. Using dynamic simulations with slag present, more detail could be added and, through curve fitting or lookup tables, the data could be plugged into the model. This would have improved the accuracy.

FactSage calculations were performed for the solubility of MgO in the slag (see Acknowledgements). At present it was not possible to use this detailed information, as we have no information on the composition of the slag arriving at the ladle from the EAF. On this basis it was possible to close the model equations and realize the models.

Software

The model was coded in Python 3, using the libraries numpy, pandas, math, pickle, and scipy, and we used matplotlib and vtk for plotting and visualization. The basic version of the model is available on github.com at https://github.com/SINTEF/refractorywear. The model is licensed under the open-source MIT license (https://opensource.org/licenses/MIT).

Tuning the model

Tables I-III list the physical and thermodynamic data that were used.

Unfortunately, detailed geometrical data and process data cannot be given due to company confidentiality. In order to apply the model to single heats, operational data from Sidenor were read. The static data included the steel mass, the time with steel in the ladle, and the temperature of the steel before leaving the EAF, together with cyclic data for vacuum pressure, heating power, measured steel temperatures, gas flow rates, and the mass and composition of additions, all versus time. The simulation was initiated at the time when the ladle was filled with liquid steel from the EAF and run for 2 hours. Once the casting process is finished, the ladle is considered to be empty, but still losing heat.

As there were no data on the initial slag mass or composition, it was not possible to incorporate changes in slag composition in the model. The initial slag mass was therefore always assumed to be 500 kg. Another consequence was that we had to assume constant solubility of C in steel and a constant MgO fraction in the slag. As a result, the solubility of MgO in the slag depends only on temperature (see Equation [34]). Furthermore, all additions were assumed to contribute to the slag. This is acceptable if the alloy additions are of the same order as, or smaller than, the pure slag contribution. However, for special steels, addition levels are significant and the model should be updated such that additions are transferred to the metal.

Different additions have different thermodynamic properties, such as melting temperature and melting enthalpy. As this information was largely unknown, we used the same melting temperature and heat of melting for all additions.
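Returning to the wall shear-velocity sub-model described at the start of this section (Appendix B), the curve-fitting step can be sketched as below. The functional form and the sample points are invented for illustration; the actual fit was built from the CFD scenario matrix.

```python
# Sketch of fitting a parametric vertical shear-velocity profile to CFD
# samples. The functional form and the data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def u_star_profile(z, a, b, c):
    # assumed shape: shear grows with height and saturates near the surface
    return a * z ** b / (1.0 + c * z ** b)

z_cfd = np.array([0.2, 0.6, 1.0, 1.4, 1.8, 2.2])          # height above bottom [m]
u_cfd = np.array([0.005, 0.02, 0.04, 0.06, 0.075, 0.08])  # CFD wall shear velocity [m/s]

popt, _ = curve_fit(u_star_profile, z_cfd, u_cfd, p0=[0.1, 1.0, 1.0])
print("fitted parameters:", popt)
print("u* at z = 1.2 m:", u_star_profile(1.2, *popt))
```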
First, we tuned the steel temperature, as a good thermal prediction was a prerequisite for the erosion model. At the beginning of each heat it was found that the initial temperature was in many cases a leftover from the previous heat. We therefore decided to use the temperature measured in the EAF, decreased by 50 K to account for heat loss during tapping. For heats where the initial temperature was unavailable or resulted in large temperature residuals, the initial temperature was corrected iteratively until the residual for the average relative temperature was below 20 K. The residual was computed from all measured values except the first, which was not reliable. In both Figure 5 and Figure 6 we see successful simulations, showing zero-order residuals of 5 K and 3 K, respectively. The first-order residuals (RMSE) are similarly 7 K and 5 K. In both cases the initial temperature was optimized, but for heat 206217 the 'measured' initial steel temperature was quite close to the optimized initial temperature. To obtain these results, the thermal efficiency of the heater was reduced to 85% and the thermal conductivity of the refractory bricks and insulation was increased significantly (see Tables I and III).

In the second step, the erosion model was tuned. We decided to work with a constant solubility of C in steel (the soluble mass fraction was set to 0.1), while the MgO solubility in the slag is based on a fixed slag composition and varies only with temperature (see Table II). As we decided to keep the solubility of C in the steel constant, the only tuning parameter available was the pore diffusion length spore (see Equation [12] and Figure 3).

This tuning was done as follows:

i) Start by simulating the preheating of the ladle
ii) Look up the heat ID, then read the operational data for the heat and simulate temperature and erosion
iii) Based on the erosion data, reduce the radial cell sizes for the three inner bricks (wear bricks)
iv) Account for the thermal history of the ladle until the next heat
v) Repeat step (ii) for the next use of the specific ladle (the next heat in the campaign, where the campaign number is unique for the wear lining, from relining until demolition), and accumulate the erosion of the bricks
vi) If the ladle was taken out for repair of some bricks, the repaired bricks are also repaired in the model. After repair the temperature is again initialized
vii) Repeat step (v) until the ladle is taken out for lining demolition. At this time the predicted erosion profiles are saved and compared to data from the demolition.

In the demolition data, the ladle is segmented into two halves, where 'Left' is close to the porous plug while 'Right' is away from the plug. In addition, the brick with the most erosion in each half is registered. In this way, a maximum erosion is recorded and the average value for each brick row is not known. The 2D model can only be compared with the average of the two halves and should therefore show some underprediction due to the above observation. For the selected tuning factor spore, the prediction in Figure 7 is good, both qualitatively and quantitatively. The shape of the erosion profile in the steel region, below the slag line, is typical for all ladles and campaigns. We note that for bricks 36-40 the erosion level is quite high. This is above the liquid steel level and is a result of metal splashing, causing thermomechanical cracking, and of disintegration due to the vacuum treatment (Jansson, 2008).
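The heat-to-heat bookkeeping in steps (i)-(vii) above maps naturally onto a small driver loop. The sketch below only illustrates that control flow; every class and function here is a hypothetical stand-in, not the published model's API.

```python
# Illustrative driver for the campaign loop in steps (i)-(vii).
from dataclasses import dataclass, field

@dataclass
class Ladle:
    n_bricks: int = 40
    wear: list = field(default_factory=list)        # accumulated erosion per brick [mm]
    def __post_init__(self):
        self.wear = [0.0] * self.n_bricks
    def shrink_wear_bricks(self, erosion):          # step (iii)
        self.wear = [w + e for w, e in zip(self.wear, erosion)]
    def repair_upper_bricks(self, first=30):        # step (vi): reset slag-zone bricks
        for i in range(first, self.n_bricks):
            self.wear[i] = 0.0

def simulate_heat(ladle, heat_id):
    # placeholder: a real call would run the thermal + erosion model for one heat
    return [0.1 + 0.02 * i / ladle.n_bricks for i in range(ladle.n_bricks)]

def run_campaign(ladle, heat_ids, repairs):
    for heat_id in heat_ids:                        # steps (ii) and (v): loop over heats
        erosion = simulate_heat(ladle, heat_id)
        ladle.shrink_wear_bricks(erosion)
        if heat_id in repairs:
            ladle.repair_upper_bricks()
    return ladle.wear                               # step (vii): compare to demolition data

profile = run_campaign(Ladle(), heat_ids=range(60), repairs={30})
print(max(profile))
```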
In Figure 8 we see the prediction from a campaign where the erosion in the steel section (bricks 10-25) is underpredicted. Note that in this case, brick numbers below 9 were not measured. The underprediction could be a result of the different steel qualities treated in this specific campaign, or because, for some reason, the variation along the perimeter at each brick layer was larger than usual. As we have no data on the erosion from heat to heat, we cannot tell if this happened during specific heats in the campaign. Another interesting feature, seen in both Figure 7 and Figure 8, is the pronounced dip in erosion around bricks 16 and 17. This may be a result of the addition of alloying materials when the ladle is approximately one-third full. Alloying elements and slag may stick to the colder wall long enough to protect the lining somewhat.

Model performance against Sidenor operational data

The model was run with all available Sidenor data for 2019. The production campaigns that started in 2018, or ended in 2020, were omitted from the current data-set as those campaigns were not complete. Altogether, we analysed 5216 heats, involving 11 different ladles and 61 campaigns. Averaged erosion over bricks 5-25 is compared in Figure 9. An outlier (ladle 8, campaign 76), marked A, is seen; its details were already shown in Figure 8. We compare the average erosion per heat in Figure 10, as distributed over the number of heats in each campaign. The model predicts a variation of ±12%, while the data show a variation of ±18%. The outlier A from Figure 9 is clearly seen.

Figures 7 and 8 show a peak in erosion close to the surface of the steel where the slag is located (around brick 35). The steel mass in the ladle varies from heat to heat, but in cases where the reported mass is low this may be due to operational challenges during casting. Therefore, the minimum steel mass is set to 110 t. This introduces another uncertainty into the predictions. It may seem that the erosion does not change much from heat to heat, but Figure 11 indicates that the predictions show significant differences in the amount eroded, and in the erosion pattern, from heat to heat. Around brick 25 (steel-wetted region) the erosion for use number 17 is around twice as high as for use 69. This difference is mainly due to temperature, time under vacuum, gas flow rates, and operational times. However, when averaged over a complete campaign, these variations are significantly reduced.
Discussion

The model predicts a smooth increase in erosion rate from the bottom towards the slag. This is in very good agreement with some of the measured erosion profiles; Figure 7 shows one example. This behaviour is a result of the bubble-driven flow, enhanced by vacuum, the transport processes in the brick (represented by spore) and in the flow boundary layer, as well as the solubility of carbon in the steel. We used an artificially high value for the saturated carbon mass fraction (xC^eq = 0.1). However, results similar to those shown here may be obtained by another combination of spore and xC^eq. We see above that the model performs quite well. At the same time, there is room for improvement. The most obvious improvements are:

i) Modelling the slag composition and adding the solubility of MgO in the slag as a function of composition. However, this requires knowledge of the composition of the slag tapped from the EAF.
ii) Separating additions into slag formers and alloying elements, and updating the enthalpy-temperature relationships to represent the true compositions of slag and metal.
iii) Empirical slag temperatures are needed to calibrate and validate the slag temperature predictions.
iv) Including the composition dependence of the solubility of carbon in the steel. Data for the steel composition are available, but the carbon solubility for the different compositions must also be available.

Some features seen in the data, such as those shown in Figure 12, cannot be reproduced by the model. The very high observed erosion rates close to the bottom cannot be explained with the available information about the operation. It is possible that gas purging was done with a very low steel level and with slag present. Such issues can be regarded as abnormal operations. Other possibilities are excessive mass loss during ladle cleaning, or inconsistent lining brick quality over a period.

Conclusions and recommendations

The presented model predicts the evolution of lining erosion fairly well. Much better agreement between model and data is hard to obtain due to uncertainties in operational data, physical data, and measurements. The model primarily predicts lining erosion based on hydrodynamics and dissolution of lining elements in steel and slag. The contribution from thermomechanical cracking (thermal spalling) of the lining is not included in the model. However, the model predicts lining temperatures at the time metal is tapped into the ladle, and this information can in the future be used to assess thermomechanical brick degradation. As this effect was not included, the model was tuned to predict less erosion than what is observed. Similarly, the lining degradation above the melt, which is particularly pronounced during vacuum treatment, was not included in the model. However, a hole in the lining this far up the ladle wall has far less serious consequences than holes deep below the steel surface.
Ladle sidewall energy model

The ladle sidewall is built with a number of radial layers, as shown in Figure 13. We let the numerical grid seen in the figure represent each vertical layer of wear bricks, and stack multiple layers on top of each other to represent the entire sidewall of the ladle. The colours in Figure 13 represent the different material properties. The bottom part of the refractory is built of a stack of discs, which may also be represented by Figure 13, but now rotated 90° clockwise. In this manner, the numerical grid for the ladle wall and casing temperature will consist of a single one-dimensional grid (here 7 cells) for the bottom and N one-dimensional grids for the vertical wall (N × 7 cells). For the horizontal and radial heat balance we have Equation [37], where rk is defined according to Figure 13 (Equation [38]).

In the cell contacting the hot liquid steel and slag (k = 1) we have Equation [39]. In Equation [39], ai,metal, ai,slag, and ai,gas are the local volume fractions of the phases contacting the element Δxi at a given time (Equations [40]-[42]), where the external temperature is given by TEXT. The radiation heat transfer coefficient is given by Equation [43], where the wall temperature is further approximated by the temperature in the near-wall cell at the previous time step (Equation [44]).

For the outer wall at yNJ = y7 (steel casing) we have Equation [45]. It should be noted that the external heat transfer coefficients must be adjusted to the situation the ladle experiences (melt refining, transport to the casting station, casting, transport to the waiting station, waiting). If the external heat transfer conditions vary between the different events, this must be handled in an appropriate manner such that we can tune the model to obtain a realistic thermal history for the ladle.

Ladle bottom energy model

The model for the bottom energy is completely analogous to that described above, but now with the discrete equation [47]. Here R is the inner radius of the ladle. For the element close to the liquid steel (we assume that steel flows onto the bottom at time = 0.0 s) we have Equations [48] and [49].

It is further assumed that the ceiling (ladle lid) is adiabatic and that the slag and metal are well mixed. However, for the refractory bricks the thermal conduction heat flux into the inner wall surface brick and the net radiation flux must balance. The surface temperature of the wall bricks is then given by Equation [57]. This illustrates the fact that it is the surface temperature that communicates radiation, and not the volume-averaged temperature of the computational cell.

The factor Ψ involves R²Δθ/2, the horizontal area element (per radian) which exchanges radiation between slag/metal and bricks.

The heat flows may be converted to a heat transfer coefficient by rewriting Equation [55] as Equation [59], where h̃2→1 is the heat transfer coefficient expressed by the bracketed terms in Equation [59].

As the lid is adiabatic, we have the condition in Equation [60] to fulfil. From Equation [60] we compute the ceiling temperature.

Effective heat transfer coefficient

The effective heat transfer coefficient h̃liq in the liquid steel and slag may now be estimated from three different contributions:

1. The wave-induced contribution h̃wave, elaborated in Appendix F (Wave-induced heat transfer)
2. The heat transfer due to bubble stirring h̃stirring, elaborated in Appendix G (Inner wall heat transfer coefficients due to forced convection by bubble stirring)
3. The heat transfer due to natural convection hNC, elaborated in Appendix E (Pure natural and effective convection heat transfer).
The combined effective coefficient is given by Equation [61].

Heat balance for the slag

Due to the melting of additives (slag formers, refining additions, alloying elements) we have chosen to represent the energy by the specific enthalpy h. First, we give the slag enthalpy by the simplified relationship in Equation [62].

For the bottom element (steel shell) we have Equation [50], where we estimate the external exchange according to Equation [51].

Radiation: wall temperatures and heat transfer above the slag/metal

Above the liquid phase the refractory will only see the top lid, the other parts of the wall, and the metal surface. We assume that the top lid is adiabatic, such that no energy is drained out through the lid. We now have to assess the radiation transfer between the different inner wear bricks and the top surface of the slag/metal. The radiative flux from a surface with emissivity εp and temperature Tp is given by Equation [52].

The radiation heat flow from surface element A1 to A2 is given by Equation [53] (Wikipedia, 2022). The geometrical configuration is seen in Figure 14. The radiation heat flow from A2 to A1 is then given by Equation [54]. The heat flow between the two surfaces A1 and A2 can be given by Equation [55] (Goodman, 1957).

Based on Equations [52]-[55], the surface normal vectors n1 and n2, and the vector connecting area elements dA1 and dA2, all radiation heat flows can be computed. These are Q̇w,m→slag-metal (from brick number m to the slag-metal interface), Q̇w,m→ceiling (from brick number m to the ceiling), and Q̇slag-metal→ceiling (from the slag-metal interface to the ceiling). Direct radiation between bricks is ignored. The radiation from the slag-metal interface must respect the fact that the slag only covers a fraction aslag of the total free surface area. Hence, the radiation temperature T⁴slag-metal is replaced by Equation [56].

Figure 1. Left: cross-section of a typical steel ladle, with wear refractory bricks, permanent lining (between wear bricks and steel casing), steel casing, bottom bricks, bottom plug for bottom gas blowing, and slide gate for transfer into the casting tundish. Right: hot ladle that has been in use and is waiting for the next heat. Maximum steel capacity is around 150 t.

Ṁslag,EAF and Ṁsteel,EAF are the transient mass flow rates of slag and steel coming into the ladle during tapping from the EAF; ṁslag,k is the mass flow rate of added slag former of type k.

Figure 2. Idealized, simplified ladle, showing slag (red), metal (blue), gas bubbles, heating elements, and refractory (brown).

In addition to the explicit wave contribution to mass transfer, the impact of the bubble-driven flow (the slag version of Equation [11]) must be added (Equation [22]).

Figure 3. (Above) MgO particle in a C matrix; the flow of liquid steel is shown on the left-hand side. (Below) Illustration of C that must diffuse through channels between MgO grains to reach the inner side of the flow boundary layer. The vertical arrow indicates the steel flow; the horizontal arrow indicates the diffusion flux.

In Equation [26] the volume flows of the carbon and steel are equal; however, the surface areas differ due to the actual volume fractions. The mass flow of carbon, per unit surface area, to the liquid in the slag region is then given by Equation [27]. Similarly, the loss of MgO in the steel region due to carbon dissolution is given by Equation [28].

Figure 10. Comparison between 'per heat averaged' measured and predicted erosion thickness at the time of demolition of the wear lining. Symbols represent different ladle numbers. Outlier A is marked.

Nomenclature (fragment): pressure [Pa] or [bar]; V, volume [m³]; M, mass [kg]; D, diffusivity [m²/s]; k, mass transfer coefficient [m/s]; J, mass flux [kg/(m² s)]; Nu, Nusselt number.

The subscripts + and − represent the values at the positive and negative sides of the cell face. Δxi is the vertical height of the grid cell at level i, while rk is the radial position index for the cell. We use harmonic averages for the cell-face thermal conductivities.

The external heat transfer coefficient is estimated as the sum of natural convection and radiation. The convective external heat transfer coefficient hNC is given by Equation [104], using the properties of air. The dimension used in the convective model should be half the height of the ladle standing upright. The effective external heat transfer coefficient is then given by Equation [46]. When the ladle is located inside a cabinet, within a compartment with external walls, the effective heat emissivity in Equation [46] can be multiplied by a factor of 0.5.

Figure 13. Element of the refractory where the transient thermal heat balance is addressed.

Figure 14. Geometrical arrangement for radiation exchange between areas (Wikipedia, 2022).

Table II. Solubilities. Carbon solubility in steel, xC^eq(Twall): 0.1. MgO solubility in slag, xMgO^eq,slag(Twall): see Equation [34].
Determination and Quantification of 5-Hydroxymethylfurfural in Vinegars and Soy Sauces

The organic compound 5-hydroxymethylfurfural (HMF) can be formed from sugars through the Maillard reaction and caramelization. In order to study the formation behaviour of HMF in sugary liquid condiments, vinegar and soy sauce were selected. High-performance liquid chromatography (HPLC) was used to determine the HMF concentrations of various brands of soy sauce and vinegar. The results showed that HMF concentrations ranged from 0.42 to 115.43 mg/kg for vinegar samples and from 0.43 to 5.85 mg/kg for soy sauce samples. HMF formation followed a zero-order kinetic model at 100 °C up to the point of maximum HMF generation in all of the tested samples. Longer heating treatment times eventually reduced the HMF content in the tested samples. In addition, the HMF content showed a clearly positive correlation with sugar content in the vinegar samples, but no similar relationship was found in the soy sauces.

Introduction

As typical representatives of liquid condiments, vinegar and soy sauce have gained wide acceptance and popularity across China for hundreds of years owing to their unique qualities [1].

Vinegar can be fermented from various materials such as rice, glutinous rice, and sorghum. After two steps of fermentation, alcoholic fermentation and acetic fermentation, vinegar is produced [2]. Research has suggested that acetic acid is the main ingredient in vinegar, which gives it an acidic taste [3]. Vinegar is extensively used worldwide as a condiment, acidulant, and food preservative [4]. Every year over 26 million hectoliters of vinegar is produced, and more than 3.2 million liters of vinegar is consumed every day in China [5].

Soy sauce is a fermented soybean food. According to the method of manufacture, products are classified as naturally brewed or acid-hydrolysed [6]. Although the acid hydrolysis method makes the production of soy sauce very cheap, fermented soy sauce is more popular owing to its intense umami taste, characteristic aroma, and nutritional value. During the fermentation process, soy protein is enzymatically degraded to amino acids, including glutamic acid and aspartic acid, and wheat polysaccharides are enzymatically degraded to monosaccharides, including glucose [1]. Moreover, soy sauce usually contains added caramel, and in some cases molasses, to give it a distinctive appearance [7]. The finished product is pasteurized at a rather high temperature (80 °C) [8].

5-Hydroxymethylfurfural (HMF), a food contaminant produced by caramelization and the Maillard reaction, is considered a potential carcinogen for humans (Zou and others 2015). Previous literature concluded that sugary food heated under household cooking conditions could act as an initiator and promoter of colon cancer because of the presence of HMF [9]. The formation of HMF is inevitable in vinegar and soy sauce, which are seen as essential liquid seasonings for home cooking. Theobald et al. [9] also indicated that it is difficult to estimate the HMF content in commercial samples prepared in households.

In order to study the formation behaviour of HMF in liquid condiments, 10 vinegars and 10 soy sauces were selected in the present study, and high-performance liquid chromatography (HPLC) was used to determine the HMF concentrations of all of the tested samples. The pH values, moisture levels, and sugar contents of the samples were also determined.
In order to study the formation regular of HMF in liquid condiment, 10 vinegars and 10 soy sauces were selected in the present study, and high-performance liquid chromatography (HPLC) was used to determine the HMF concentrations of all of tested samples.The pH values, moisture levels, and Materials and Methods 2.1.Materials.All vinegar (A1-A10) and soy sauce (B1-B10) samples are Chinese-style brewed and were best sellers in Chinese markets in 2017.High purity (>99%) HMF standard were purchased from Sigma-Aldrich (St. Louis, MO, USA).Methanol (HPLC grade) was provided by Thermo Fisher Scientific (USA).Ultrapure water was purchased from Hangzhou Wahaha Group Co., Ltd.Deionized water was obtained in house. Preparation of Heating Treatment Samples.The samples of 30 g vinegars in 100 mL flask were heated at 100 ∘ C from 10 to 60 min, while soy sauces were heated from 20 to 120 min.After the heated time finished, the samples were removed from the bath for determination with HPLC. Measurement of pH. The pH was measured at 20 ∘ C using a Mettler Toledo S40 SevenMulti6 pH meter (Beijing Songxinhongze Technology Co., Ltd., Beijing, China) after appropriately removing the sediment layer.The meter was calibrated with pH 4.0 and 7.0 buffers. Measurement of Moisture Levels and Sugar Content. Chinese National Standard methods were employed to evaluate the general components of samples.Moisture level was determined by electrothermal constant-temperature blast drying oven (WG9220A) according to the change in weight after drying for 20 h.The sugar content was determined by thermal ion chromatograph (IC-3000) according to the GB/T22221-2008 Chinese National Standard. HMF Analysis. The analysis of HMF was performed using the method proposed by Gökmen and S ¸enyuva [10] and Gökmen et al. [11] with modifications.Here, a methanol water solution (5 : 95, v : v) was employed rather than an acetonitrile and acetic acid aqueous solution (10 : 90, v : v), and HMF was well separated at a flow rate of 1.0 mL/min. Each of the vinegar and soy sauce samples (20 g) was placed into a 50 mL centrifuge tube and covered with a cap.The sample was maintained continuously at 4 ∘ C while being shaken vigorously for 3 min and then centrifuged for 15 min at 9600 rpm.Then the supernatant solution and methanol (v : v, 1 : 1) were mixed and shaken for 3 min.The supernatant solution was filtered through a 0.45 m disk filter and stored at 4 ∘ C until conducting the analysis.All experiments were conducted in triplicate. The concentration of HMF was obtained by comparing the retention time value and the UV spectrogram with those of the appropriate standards.The peak area values obtained from the various HMF standards were employed to construct a standard curve, as shown in Figure 1(d). 2.6.Statistical Analysis.Analysis of variance (ANOVA), regression analysis (curve fitting), and the calculation of kinetic rate constants were performed using the Microcal Origin 8.0 software (Origin Lab., Northampton, MA, USA).ANOVA test was performed for all experimental runs to determine significance at 95% confidence interval.All experiments were performed in triplicate. 
Results and Discussion

Figures 1(a), 1(b), and 1(c) depict the chromatographic separation of HMF in a representative standard solution and in representative commercial vinegar and soy sauce samples, respectively. The chromatograms clearly indicate that HMF was completely resolved from all other components of the vinegar and soy sauce samples. HMF eluted at approximately (18.899 ± 0.11) min (n = 10) with good retention time reproducibility.

pH Value, Sugar, and Moisture Content in Vinegars. The pH, sugar, and moisture content of the vinegar samples are listed in Table 1. Vinegar is obtained by a double fermentation process (alcoholic and acetic fermentation) of sugary and starchy substrates [2]. The key ingredient is acetic acid, which gives it an acidic taste, although there may be additions of other kinds of acid, such as tartaric and citric [12, 13]. Compared with other condiments, vinegar has a lower pH value. As shown in Table 1, the pH values of the 10 vinegars were in the range 2.743-3.598. Although this result is similar to that reported by Lalou et al. (2.83-3.53) [14], there were significant differences between the tested vinegar samples (p < 0.05). There were also great variations in moisture content among the vinegar samples. The highest moisture content was 98.292% (sample A10), while the lowest was 89.010% (sample A8). Previous research has shown that not only pH value and moisture content but also the varieties and contents of sugars affect HMF formation [15-17]. Numerous studies have indicated that fructose is the most reactive sugar, relative to sucrose and glucose, in the formation of HMF under acidic conditions [18]. Therefore, sugar content was also considered in this study. The sugar data obtained are listed in Table 1. The main sugars in the vinegars were glucose and fructose; the concentrations of sucrose, maltose, and lactose were very low.

Initial HMF Concentrations of Vinegars. Vinegar has high antioxidant activity, antimicrobial properties, antidiabetic effects, and therapeutic properties [19], which has made it widely used as an acidic seasoning. As all food matrices used for the production of vinegar contain sugar, formation of HMF during either the production process or storage is possible.

The initial HMF concentrations of the 10 vinegars ranged from 0.42 to 115.43 mg/kg (Figure 3(a)). The concentrations of HMF in vinegar samples have been reported by many researchers, with values ranging from 0 mg/kg up to an extremely high value of 14,145 mg/kg, an indication of the different practices exercised by different manufacturers and the lack of consistent process optimization. The range of HMF concentrations for vinegar samples obtained in our study was much lower than that reported by Lalou et al. (211-14,145 mg/kg) [14]. Caligiani et al. [20] reported a range of HMF concentrations of 0-3.38 mg/kg for 105 vinegar samples, and they concluded that their method, based on proton nuclear magnetic resonance spectroscopy, was satisfactory for the quantification of HMF concentration. Bignardi et al. [21] reported a range of HMF concentrations of 0.82-1,153.55 mg/kg for 3 vinegar samples, which was also greater than that obtained in the present study. In addition, the HMF concentrations reported by Masino et al. [22] for 10 vinegars were found to be significantly greater than the values determined by others. Theobald et al.
[9] analyzed the HMF content of various kinds of vinegar. They found that balsamic vinegars exhibited very high HMF concentrations of about 300 mg/L and, depending on their age, the concentrations of HMF ranged up to 5.5 g/kg.

HMF Concentrations in Heat-Treated Vinegars. The HMF concentrations in heat-treated vinegars are shown in Figure 3(a), while kinetic analyses of HMF are given in Table 2. As shown in Figure 3(a), the HMF concentrations in most of the vinegars gradually increased with extension of the heating time. However, the HMF concentrations in samples A1, A2, A6, and A9 first increased and then decreased.

The highest HMF contents in samples A1 and A6 were 58.28 mg/kg and 9.08 mg/kg (at 40 min), and the HMF content was reduced by 2.93% and 20.70%, respectively, when heating continued for a further 20 min. In addition, the time of maximum HMF formation (50 min) was similar in samples A2 and A9; their HMF contents were reduced by 2.10% and 5.96%, respectively, when heating continued for a further 10 min. It was concluded that longer heating treatment times reduce the HMF content, which might be due to side reactions consuming HMF. As the analysis in Table 2 shows, the data followed a zero-order kinetic model at 100 °C up to the point of maximum HMF generation in all of the vinegar samples. Among the 10 vinegars, the maximum HMF formation rate was 1.0961 (sample A8) and the minimum was 0.0361 (sample A10). The cause might be related to moisture and sugar content: higher sugar content, as well as lower moisture content, promoted HMF formation.

Effect of Sugar on HMF Contents in Vinegars. The HMF content after thermal treatment for the longest time and the initial sugar content are shown in Figure 2(a). In our study, the concentration of HMF showed a clearly positive correlation with the sugar content in vinegars. Among all the tested vinegars, the highest HMF content was 181.04 mg/kg (sample A8), followed by 79.53 mg/kg (sample A9). The highest sugar content was 1.283% (sample A8), comprising 0.990% glucose and 0.293% fructose. The second highest sugar content was 0.667% (sample A9), consisting of glucose only. The effects of different kinds of sugar on HMF formation have been investigated in sponge cake models under heat treatment [23]; the results showed that sucrose, lactose, and maltose yielded less HMF than glucose and fructose. In Locas's study, fructose was the most reactive sugar, relative to sucrose and glucose, in the formation of HMF under acidic conditions [17]. These findings support our conclusion that HMF is more readily formed in acidic systems rich in sugars, especially fructose.

(Table 2 footnotes: t, the time of heat treatment (min); k, rate constant (min−1); R2, regression coefficient; Af, the accuracy factor; Bf, the bias factor; SS, the sum of the squares of the differences of the natural logarithms of observed and predicted values; RMSE, the root mean square error.)

pH Value, Sugar, and Moisture Content in Soy Sauces.
pH Value, Sugar, and Moisture Content in Soy Sauces. The pH, sugar, and moisture content of the soy sauce samples are listed in Table 1. Soy sauce production involves vigorous lactic and alcohol fermentation. During the fermentation process, lactic acid bacteria produce lactic acid and acetic acid, which lowers the pH [1]. As shown in Table 1, the pH values of the 10 soy sauce samples were in the range 4.334-5.122, higher than the highest pH value (3.598) among the 10 vinegar samples, and there were significant differences between the soy sauce samples (p < 0.05). Lu et al. [24] analyzed the pH of 40 samples of Chinese soy sauce and found a range of 3.86-4.98, likewise higher than the pH values of the tested vinegars. The moisture content of the soy sauces varied from 62.267% to 74.543%, which was 26.73% lower than the moisture content of the tested vinegars. In Kim and Lee's study, the moisture content of soy sauce showed a decreasing trend during the fermentation process [25], which may be one reason why moisture content differed so much among the soy sauces. In soy sauce production, soy protein is enzymatically degraded to amino acids, and wheat polysaccharides are enzymatically degraded to monosaccharides [1]. The sugar data obtained for the tested soy sauces are listed in Table 1. The main sugars in the soy sauces were glucose, fructose, and sucrose; maltose was also detected in sample B6. The highest sugar content was 5.597% (sample B3), comprising 1.150% glucose, 0.137% fructose, and 4.310% sucrose.

Initial HMF Concentrations of Soy Sauces. As a condiment, for soy sauce to have a palatable taste, about half of its nitrogenous compounds must be free amino acids; glutamic acid in particular is a very important component [26]. Studies have shown that the rates of HMF formation from glucose and sucrose are enhanced in the presence of amino acids, especially acidic amino acids [17]. It was therefore inevitable that HMF would form in soy sauce, which is abundant in sugar and amino acids. The initial HMF concentrations of the 10 soy sauces ranged from 0.43 to 5.85 mg/kg (Figure 3(a)). No significant difference in HMF concentration was observed between sample B5 (0.49 ± 0.06 mg/kg) and sample B8 (0.43 ± 0.01 mg/kg). The highest HMF concentration was found in sample B2 (5.85 ± 0.11 mg/kg) and the lowest in sample B8 (0.43 ± 0.01 mg/kg). Compared with vinegars, little information is available on the HMF content of soy sauce. Wang et al. [27] reported HMF concentrations of 0-47.56 mg/kg for 6 kinds of Chinese soy sauce. Goscinny et al. measured the HMF content of Bouillon sauce, and only nonquantifiable traces were observed [28]. In soy sauce production, caramel, and in some cases molasses, is usually added to give the product its distinctive appearance [7]. Most of the existing literature has focused on the HMF content of caramel [29,30].
HMF Concentrations in Heat-Treated Soy Sauces. The HMF concentrations in the heat-treated soy sauces are shown in Figure 3(b), and the kinetic analyses of HMF formation in Table 2. Figure 3(b) clearly indicates that the HMF concentrations in most of the soy sauces kept increasing with the length of the heating treatment. However, the HMF concentrations in samples B1, B3, B4, B5, B9, and B10 first increased and then decreased. The highest HMF contents in samples B1, B4, B9, and B10 were 14.53, 8.99, 6.11, and 4.27 mg/kg (at 80 min), and the HMF content was reduced by 20.10%, 18.69%, 0.82%, and 41.55%, respectively, after a further 40 min of heating. Similarly, samples B3 and B5 both reached their maximum HMF content at 100 min, after which the HMF content fell by 12.67% and 9.56%, respectively, with a further 20 min of heating. As with the vinegar samples, we conclude that a longer heating treatment can reduce the HMF content. As shown in Table 2, the data followed a zero-order kinetic model at 100 °C up to the point of maximum HMF generation in all soy sauce samples. Among the 10 soy sauces, the maximum HMF formation rate was 0.6618 (sample B6), and the minimum was 0.0150 (sample B10). This may be related to moisture and sugar content. In addition, a previous study showed that the addition of alanine as a catalyst to a maltose solution increased HMF concentrations [31].

Effect of Sugar on HMF Contents in Soy Sauces. The HMF content after the longest thermal treatment and the initial sugar content are shown in Figure 2(b). For the tested soy sauces, the highest HMF content was 87.29 mg/kg (sample B6), followed by 61.51 mg/kg (sample B2). The highest sugar content was 5.597% (sample B3), comprising 1.150% glucose, 0.137% fructose, and 4.310% sucrose, and the second highest was 2.081% (sample B5), comprising 1.180% glucose, 0.628% fructose, and 0.273% sucrose. However, no association was found between HMF concentration and sugar content. This may be because the transformation capacity of the sugars differed under different conditions of pH, amino acid content, moisture content, and so on.

Conclusions. The HMF concentrations of 10 vinegars and 10 soy sauces were evaluated by HPLC. Significant differences (p < 0.05) were observed in the HMF concentrations of both the vinegar and the soy sauce samples, which varied widely, from 0.42 to 115.43 mg/kg for the vinegar samples and from 0.43 to 5.85 mg/kg for the soy sauce samples. Except for samples A4, A5, and A6, the HMF concentrations in the vinegar samples were all greater than those in the soy sauce samples. The results showed that the HMF concentrations in the soy sauce samples were all below the maximum amount allowed by the Chinese national standard (40 mg/kg); among the tested vinegar samples, however, only 4 were below this level. In our study, HMF concentrations were clearly positively correlated with the sugar contents of the vinegar samples. A longer heating treatment reduced the HMF content in all of the tested samples. HMF formation followed a zero-order kinetic model at 100 °C up to the point of maximum HMF generation in both the vinegar and the soy sauce samples.

Figure 3: HMF contents of the thermal process models (vinegar samples (a), soy sauce samples (b)) at different heating times.

Table 1: pH value, sugar, and moisture contents in vinegars and soy sauces.
Different letters indicate a significant difference in pH value or moisture content between the different kinds of vinegar or soy sauce.

Table 2: Summary of the kinetic analysis of HMF formation in the heat-treated vinegar and soy sauce samples. R², Af, Bf, SS, and RMSE indicate the reliability and accuracy of the models; a T: the heating temperature (°C); b C(t): the HMF content (mg HMF per kg sample); t: the time of heat treatment (min); c k: rate constant (min−1); d R²: regression coefficient; e Af: the accuracy factor; f Bf: the bias factor; g SS: the sum of the squares of the differences of the natural logarithms of the observed and predicted values; h RMSE: the root mean square error.
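The footnote above names the accuracy factor (Af), bias factor (Bf), SS, and RMSE without giving their formulas. A minimal sketch follows, assuming the widely used Ross formulation of Af and Bf; the observed/predicted pairs are hypothetical, for illustration only.

```python
# Sketch of the model-performance indices from the Table 2 footnote, assuming
# the common Ross (1996) definitions of Af and Bf. Values are hypothetical.
import numpy as np

observed = np.array([2.1, 13.5, 24.0, 35.2, 45.9, 57.0, 68.3])
predicted = np.array([2.5, 12.9, 23.4, 33.8, 44.3, 54.7, 65.1])

ratios = np.log10(predicted / observed)
bf = 10 ** ratios.mean()            # Bf > 1 means the model over-predicts on average
af = 10 ** np.abs(ratios).mean()    # Af >= 1 is the average multiplicative spread
ss = np.sum((np.log(observed) - np.log(predicted)) ** 2)  # "SS" (natural logs)

print(f"Bf = {bf:.3f}, Af = {af:.3f}, SS = {ss:.4f}")
```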
Differences in RNA and microRNA Expression Between PTCH1- and SUFU-mutated Medulloblastoma

Background/Aim: Germline mutations in PTCH1 or SUFU in the sonic hedgehog (SHH) pathway cause Gorlin's syndrome, with an increased risk of developing SHH-subgroup medulloblastoma. Gorlin's syndrome precludes the use of radiotherapy (a standard component of treatment) due to the development of multiple basal cell carcinomas. Also, current SHH inhibitors are ineffective against SUFU-mutated medulloblastoma, as they inhibit upstream genes. In this study, we aimed to detect differences in the expression of genes and microRNAs between SUFU- and PTCH1-mutated SHH medulloblastomas that may hint at new treatment directions. Patients and Methods: We sequenced RNA and microRNA from the tumors of two patients with germline Gorlin's syndrome, one with a PTCH1 mutation and one with a SUFU mutation, followed by bioinformatics analysis to detect changes in gene and miRNA expression in these two tumors. Expression changes were validated using qRT-PCR. Ingenuity pathway analysis was performed in search of targetable pathways. Results: Compared to the PTCH1 tumor, the SUFU tumor demonstrated lower expression of miR-301a-3p and miR-181c-5p, matrix metallopeptidase 11 (MMP11), and OTX2, and higher expression of miR-7-5p with a correspondingly lower expression of its target gene, connexin 30 (GJB6). We propose mechanisms to explain the phenotypic differences between the two types of tumors and to understand why PTCH1 and SUFU tumors tend to relapse locally (rather than metastatically, as in other medulloblastoma subgroups). Conclusion: Our results help towards finding new treatable molecular targets for these types of medulloblastoma.

Both PTCH1 and SUFU are vital players in the activation of the sonic hedgehog (SHH) pathway, one of the main trafficking networks that regulate events during embryonic development; aberrations in its regulation may cause congenital disabilities and cancer. Activation of the SHH signaling pathway is mediated by the receptor Smoothened (SMO). When the SHH ligand is low or absent ("off-state"), SMO is transported to the membrane, where its activity is inhibited by Patched (PTCH1). The downstream effectors are inhibited via SUFU, resulting in inhibition of target gene expression. When SHH binds to Patched ("on-state"), SMO levels increase and SUFU is deactivated, leading to activation of gene expression, which results in cell growth and the patterning of multicellular embryos (6). One of the four subgroups of MB, the most common malignant brain tumor in children (7), is the SHH subgroup, which is most frequent in infants (<3 years old) and young adults (>16 years old). Mutations in PTCH1 or SUFU are frequent in the tumors of infants with SHH-MB (8). Although most of these mutations are sporadic, SUFU and PTCH1 germline mutations can be detected in 2% of all patients with MB, exclusively in the SHH subgroup (9). The risk of developing MB has been suggested to be 20 times higher with germline SUFU mutations, and at a younger age, than with germline PTCH1 mutations (4,10). MBs with a germline SUFU mutation show poor prognosis, with an overall survival rate of 66% (10), much lower than the >90% overall survival rate reported for desmoplastic MB in young children (11). These children often demonstrate local relapse, with progression-free survival of 42% at five years, and they will most likely need radiation for salvage therapy (10).
Standard MB therapy for children over three years old includes surgical resection, upfront craniospinal irradiation, chemotherapy, and high-dose chemotherapy with hematopoietic stem cell rescue in high-risk patients (12). Because of the enormous cognitive damage caused by radiation in infants, their treatment is usually based on chemotherapy alone. However, some children will relapse or progress and will need subsequent radiation therapy. Children with undiagnosed GS will develop hundreds to thousands of BCCs in the irradiated areas. Therefore, it is vital to identify children with germline PTCH1/SUFU mutations to avoid irradiation at all costs. Recently, new SHH inhibitors have been developed for the treatment of SHH-MB (13). However, these are SMO inhibitors and will therefore only inhibit upstream activation of the pathway (e.g., at the level of PTCH1 or SMO) and will not affect downstream mutations, such as SUFU (14). Also, SMO inhibitors cause irreversible growth plate fusion in children, and clinical studies are therefore employing these agents for skeletally mature children only (15). There is a desperate need for new, radiation-free therapies for young children with GS-SHH-MB, and in particular for children with a germline SUFU mutation, who will not respond to SMO inhibitors. We aimed to find new potential molecules, such as microRNAs (miRs), to serve as diagnostic biomarkers or as drug targets (16,17). miRs are short noncoding RNAs that play an essential role in gene translational regulation. Moreover, miRs can be used to define specific signatures for individual cancers and cancer stages (18,19), including for MB subgroup classification (20). In this study, we searched for targetable pathways in the tumors of two patients diagnosed with SHH-MB, one with a germline SUFU mutation and the other with a germline PTCH1 mutation. We aimed to detect similarities and differences in the expression levels of genes and miRs to better understand the biology of these two tumors, which could help develop targets for future clinical use.

Patients and Methods. Patients and tumor collection. The study design adhered to the tenets of the Declaration of Helsinki and was approved by the institutional and national review board of the Israel Ministry of Health. Informed consent was obtained. Primary tumor samples were collected at surgery, placed in RNAlater™ (AM7020; Thermo Fisher Scientific, Waltham, MA, USA), and stored at −80˚C.

RNA and microRNA extraction and sequencing. Total RNA was extracted from freshly frozen tumor tissue samples as previously described (20). Library preparation and sequencing were performed using the Illumina TruSeq protocol on the HiSeq 2500 machine. Raw data were deposited at the Sequence Read Archive (SRA) under accession number SRP095882. The PTCH1-MB was included in our previous study (SRS1888277) (20), while the SUFU-MB is newly deposited (SRS3694085).

RNA-seq data analysis. Raw reads were processed and analyzed as previously described (20). To obtain dispersion estimates for a count dataset, we used the 'estimateDispersions' function in the DESeq R package (21). Since we had no replicates, we set the argument method="blind", which ignores the sample labels and computes the empirical dispersion value of each gene as if the two samples were replicates of a single condition. The argument sharingMode was set to "fit-only". Genes with an FDR-corrected p<0.05 were noted as displaying different expression levels.
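The differential expression calls above rest on an FDR-corrected p < 0.05 cutoff. The actual analysis was run in R with DESeq, as described; purely as a language-neutral illustration of the Benjamini-Hochberg correction behind such a cutoff, here is a minimal sketch with made-up p-values.

```python
# Minimal Benjamini-Hochberg FDR sketch illustrating an "FDR-corrected p < 0.05"
# cutoff (the study's own analysis used DESeq in R; this is illustrative only).
import numpy as np

def bh_adjust(pvals):
    """Return Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                        # ascending raw p-values
    ranked = p[order] * n / np.arange(1, n + 1)  # p_i * n / rank_i
    # enforce monotonicity from the largest rank downwards
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(q, 0, 1)
    return out

raw_p = [0.0001, 0.0004, 0.002, 0.03, 0.04, 0.2, 0.5]  # hypothetical gene-level p-values
q = bh_adjust(raw_p)
print(list(zip(raw_p, q.round(4), q < 0.05)))          # first three pass FDR < 0.05
```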
MicroRNA-seq data analysis. Raw reads were processed and analyzed as previously described (20). Unless specified otherwise, a p-value of 0.05 was used as the significance cutoff.

Ingenuity pathway analysis (IPA). Genes with FDR<0.05 and miRs with p<0.05 were uploaded to QIAGEN's Ingenuity® Pathway Analysis (IPA®, QIAGEN Inc., https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis) software (22). The IPA was used to gain insight into the overall biological changes suggested by the expression data, for miR target gene prediction, and for integrated miR and gene analysis. Using the Ingenuity Pathways Knowledge Base, each gene was linked to specific functions, pathways, and diseases.

Analysis of an independent microarray dataset. The dataset GSE85217 (23), downloaded from the Gene Expression Omnibus (GEO) database (24), comprises 763 samples, of which 223 are SHH. There is no information regarding germline mutations in the database, hence we had to choose the samples with the highest probability of representing GS patients. The likelihood of developing MB in patients with GS is higher in younger children (2), and as we were interested in SHH-MBs as similar as possible to those examined in our study, we first selected tumors from children under three years of age. We then selected only those with a deletion in 10q, which includes the SUFU gene (n=3), and those with a deletion in 9q, which contains the PTCH1 gene (n=13). In this way, we could compare tumors with mutations similar to ours, even if only in the tumor and not in the germline. We employed a moderated t-test, conducted using the limma (25) R package (version 3.38.3). Deletions, histology, and age were included as covariates in the linear model, and an FDR-corrected p-value of 0.05 was used as the significance cutoff.

siRNA transfection. The cells were transfected with siRNA at a final concentration of 30 nM per siRNA (SUFU siRNA, PTCH1 siRNA, or scramble siRNA for control) by using the Avalanche® Everyday Transfection Reagent (EZT-EVDY-1), according to the manufacturer's protocol. Briefly, the cells were passaged one day before transfection to reach a confluency of 60-70%. The next day, the selected siRNA was incubated in a serum-free medium with the recommended volume of transfection reagent for 20 min at room temperature. The transfection mixture was gently added to the prepared cell culture plate(s) for continued incubation at 37˚C for 24-36 h, until harvesting and RNA extraction.

Reverse transcription (RT) and quantitative PCR (qPCR). Total RNA was isolated from Daoy cells by using the NucleoZol homogenizing reagent (Macherey-Nagel 740404.200), according to the manufacturer's protocol. Purified RNA samples were reverse-transcribed using the GoScript Reverse Transcription System (Promega, Madison, WI, USA, A5000) according to the manufacturer's protocol. The cDNA product was diluted 1:5 and mixed with SYBR Green PCR Master Mix (Thermo Fisher Scientific) for amplification on an AriaMX thermal cycler (Agilent Technologies, Santa Clara, CA, USA) using the gene-specific primer sets described below. Each qPCR reaction had a total volume of 12 μl. Three biological replicates were performed, and all reactions were run in triplicate. The comparative Ct method was used to analyze mRNA levels, with actin as the normalization control.
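As a minimal sketch of the comparative Ct calculation described above (the 2^−ΔΔCt method with actin as the normalization control): the Ct values, condition labels, and fold change below are hypothetical, not the study's data.

```python
# Sketch of the comparative Ct (2^-delta-delta-Ct) method with actin as the
# normalization control. All Ct values are hypothetical triplicate means.

def ddct_fold_change(ct_gene_treated, ct_actin_treated, ct_gene_control, ct_actin_control):
    """Relative expression of a target gene in treated vs. control cells."""
    dct_treated = ct_gene_treated - ct_actin_treated   # normalize to actin
    dct_control = ct_gene_control - ct_actin_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical example: a target gene after SUFU siRNA vs. scramble siRNA control.
fold = ddct_fold_change(ct_gene_treated=24.1, ct_actin_treated=17.8,
                        ct_gene_control=25.6, ct_actin_control=17.9)
print(f"fold change vs. control = {fold:.2f}")  # >1 means up-regulated after knockdown
```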
Results. Patients. Primary tumor samples were obtained from two children diagnosed with non-metastatic, desmoplastic MB. Both tumors were identified as belonging to the SHH subgroup by using nanoString nCounter Technology, as previously described (26).

Patient 1: A boy, the first child of parents of Yemenite origin who are not relatives, with no family history of cancer. Neonatal follow-up showed a large head circumference, with continued growth on the 98th percentile. There was some delay in motor and speech development. The boy was noted to have torticollis, unilateral dilation of the renal pelvis, and trivial pulmonary stenosis. At age 22 months, he presented with a two-week history of recurrent falls followed by vomiting and apathy. MRI at diagnosis (Figure 1B and C) showed a large heterogeneous mass in the cerebellar vermis with obstructive hydrocephalus. No metastatic spread to the craniospinal axis was evident. The patient underwent gross total removal of the tumor, and pathology showed desmoplastic medulloblastoma (Figure 1F). Genetic testing was performed due to macrocephaly and showed a de novo germline mutation in PTCH1, NM_000264.5(PTCH1):c.379G>T (p.Glu127*) (Figure 1A). The boy was treated according to the COG 99703 protocol without irradiation. He is currently 10 years old, with no evidence of relapse. He developed keratocystic jaw cysts at the age of 5 years and has mild learning difficulties. He has palmar pits and multiple melanocytic nevi.

Patient 2: A girl, the fourth child of unrelated parents of Iraqi-Moroccan/Yemenite origin, with no family history of malignancy. At age 3 months, the local well-baby clinic noticed increasing head circumference, new-onset strabismus, and lethargy. MRI (Figure 1D and E) showed extreme hydrocephalus and a multicystic mass in the posterior fossa (PF). There were no metastases visible in the brain or spine. She underwent partial resection of the mass, leaving a supratentorial residue. The pathology result was MB with extensive nodularity (MBEN) (Figure 1G). The tumor was positive for both GAB1 and YAP in more than 80% of cells. Due to her extremely young age, she underwent genetic testing, which showed a de novo heterozygous loss of exon 3 in the SUFU gene at the DNA level. She was treated according to the ACNS 1221 protocol (before suspension of enrolment) without intrathecal chemotherapy. She is now 6 years and 4 months old, with no evidence of relapse. She shows some residual mild ataxia and dysmetria and attends a regular kindergarten. She has no other physical findings of GS.

Expression profiling of genes. Comparison of SUFU-MB against PTCH1-MB detected 111 genes displaying different expression levels. Of these, 23 were up-regulated and 88 were down-regulated in SUFU-MB compared to PTCH1-MB (Figure 2A). The up-regulated cluster was associated with extracellular matrix organization and cell adhesion, while the down-regulated cluster was associated with complement activation (classical pathway) and immune system processes. Of the MB-related genes, we detected significantly lower expression of OTX2 in the SUFU-mutated tumor. OTX2 plays an essential role in normal cerebellar development (27,28), including in MB (29). Indeed, lower OTX2 expression is one of the biomarkers used to differentiate SHH-MB from other MB subgroups in a real-time PCR assay panel (30). In MB, the effect of overexpressed OTX2 as an oncogene is predominantly observed in Group 3/4, where it participates in tumor localization and migration (28). In contrast, in SHH tumors, overexpression of OTX2 inhibits tumor progression (27).
The lower expression of OTX2 may contribute to the increased risk of local relapse and, therefore, to the poorer event-free survival found in SUFU-mutated patients (10). Recently, mTORC1 signaling was identified as a downstream effector of OTX2 in Group 3 MB (31). A relationship between the mTOR pathway and SHH has also been demonstrated for SHH-MB (32,33). However, the mTOR signaling molecules did not show differential expression between the two patients, indicating that mTOR probably does not have a role in the pathophysiological differences between the two SHH tumors. We found decreased expression of matrix metallopeptidase 11 (MMP11) in SUFU-MB compared to PTCH1-MB. MMPs are endopeptidases responsible for the degradation of extracellular matrix components (34), and they play a significant role in cancer (35). Increased expression of some MMPs, including MMP11, correlates with the WHO tumor-grading classification of human malignant gliomas (36). It will be of interest to test whether the expression of MMPs contributes to the increased probability of local relapse in SHH tumors. Immune signaling pathways dominated the list of genes whose expression differed between the tumors from the two patients (Table I). Over-represented diseases and biological functions included skeletal and muscular disorders, cell death and survival, embryonic development, and nervous system development and function (Table II). Top associated network functions included cancer-related functions, such as cellular development and DNA replication, recombination, and repair (Table III). Genes belonging to these networks include OTX2 and MYC, which are part of the expression panel used to differentiate MB subgroups (30). The genes GSTM1 and HLA-G, which belong to network number 2 (Table III), are known to be associated with BCC. Genetic variants in GSTM1 might contribute to the variation in the number of BCCs and jaw cysts and their presentational phenotypes in patients with PTCH-mutated as opposed to SUFU-mutated GS (37), while the expression of the HLA-G gene is high in BCC and decreases following radiotherapy (38).

Expression profiling of microRNAs. Overall, 778 miRs were expressed in both tumors, of which 11 showed different expression levels between SUFU-MB and PTCH1-MB: three displayed higher and eight lower expression levels (Table IV). IPA analysis of the differentially expressed miRs identified cancer-related functions, such as cell morphology, cellular development, and cellular growth and proliferation, as top diseases and biological functions (Table V). Top network functions included RNA post-transcriptional modification and cancer, cardiovascular diseases, and connective tissue disorders (Table VI). Among the miRs that showed lower expression in SUFU-MB were miR-301a-3p and miR-181c-5p. These miRs are related to early stages of solid tumor processes and to the processing of siRNA networks, and they regulate, either directly or indirectly, the expression of DICER1 (Figure 2B and C). We found that miR-301a-3p demonstrates lower expression levels in SUFU-MB compared to PTCH1-MB. miR-301a-3p acts as an oncomiR (39,40), as it down-regulates the expression of the SMAD4 gene. Inhibiting miR-301a-3p reversed gemcitabine resistance in pancreatic cancer cells in vitro by regulating the expression of PTEN (41). The role of miR-301a-3p in MB is as yet unclear, and it may have different effects in different tissues.
One of the target genes of miR-301a-3p is GABRA4, which is downregulated in MB (42). We found that the expression of GABRA4 was higher in SUFU-MB, possibly resulting from the lower expression of its regulator, miR-301a-3p. GABRA4 is part of the GABA receptor signaling pathway and, considering our findings, it may not be downregulated in all MBs as previously thought. The specific role of GABRA4 in MB tumorigenesis in general, or in SUFU-mutated MB, is yet to be determined. The expression level of miR-7-5p was higher in SUFU-MB than in PTCH1-MB and, correspondingly, the expression of its target gene, connexin 30 (GJB6), was lower. Connexins play a role in the gap-junction signaling pathway and function as tumor suppressors (43). The expression of connexin 30 in human glioblastoma cells was found to reduce their growth in vitro but, at the same time, made them resistant to the effects of radiation therapy (44). Increasing the levels of connexin 30 in SUFU tumors may serve as a therapeutic option to decrease cell proliferation, while resistance to radiation therapy would be irrelevant in these young patients, whose up-front treatment is planned to be radiation-free. miR-379-5p showed higher expression in the SUFU-MB compared to the PTCH1-MB, while its target gene, FOXL2, demonstrated lower expression levels. FOXL2 is a transcription factor involved in congenital disorders (45). It directly modulates the expression of estrogen receptor 2 (ESR2) (46). A recent study found that 17β-estradiol, via ESR2, exerts chemoprotective effects in some MB cell lines (47). In the SHH pathway, SUFU and GLI interact and bind to PIAS1 (48), which activates estrogen receptors, including ESR2 (49). It may be instrumental to try to decrease the expression of miR-379-5p in SUFU-mutated tumors, which would increase the expression of FOXL2 and, therefore, of ESR2, which may have chemoprotective effects.

Supporting evidence from an independent cohort. Since the current study reports only two patients, it needs to be repeated in a larger cohort. Larger independent SHH cohorts of tumors with known germline mutations are not publicly available, but we were able to identify an independent dataset of SHH tumors. Although this dataset does not indicate which patients have a germline mutation (i.e., Gorlin's syndrome), it does include important phenotypic data, such as age and chromosomal deletions. We know from the literature that the majority of SHH-MBs occur in infants younger than 3 years (8). We chose to analyze data from these infants whose tumors carry a deletion in 10q or 9q (which include SUFU and PTCH1, respectively), adding significant support to our results. We assumed that the PTCH1- and SUFU-loss tumors would have a genetic expression profile similar to that in our patients. Of the 111 differentially expressed genes detected in our patients, 54 were also included in the microarray used in the GSE85217 dataset (available upon request). Of these, the expression of six genes (11%) was significantly different between PTCH1- and SUFU-loss tumors. These six genes were upregulated in the SUFU-loss tumors, in agreement with their expression in our SUFU-MB patient. Three of the six genes (GABRA4, NPNT, and PCP4L1) are targets that demonstrated an inverse expression relationship with miRs in our patients (Figure 3).

In vitro validation.
To validate our findings, we used short interfering RNA (siRNA) to knock down SUFU or PTCH1 expression in the Daoy MB cell line, and we tested the effect on the expression of selected genes that were differentially expressed between the PTCH1- and SUFU-loss tumors. To test knockdown efficiency, we quantified PTCH1 and SUFU expression in the transfected cells. The expression of PTCH1 and SUFU was reduced by 60% and 80%, respectively, in the siRNA-transfected cells (Figure 4A). Five genes were selected, and their expression following PTCH1 and SUFU downregulation was tested. The direction of change observed in the in vitro model was consistent with the expression changes observed in the PTCH1- and SUFU-loss tumors.

Discussion. Through the detection of molecular differences between two SHH-MB tumors bearing different germline mutations, this study contributes to our understanding of the biology of heritable MB and suggests potentially drug-targetable pathways. Treatment of MB usually involves chemotherapy and craniospinal irradiation, with severe long-term effects on memory and cognition, growth and development, and hearing, and a risk of secondary malignancies. Much research is devoted to finding more successful and less damaging treatments for this disease. There are four subgroups of MB, but the commonest in infancy is the SHH subgroup (50), of which approximately 20% will have germline mutations in PTCH1 or SUFU (Gorlin syndrome) (9). Children with GS should not receive radiation, and general protocols omit or delay radiation for infants until they reach three years of age. Alternative treatments for infants with MB, and in particular in the context of GS, are desperately needed. Children with GS-SUFU differ phenotypically from those with GS-PTCH1, and even their MBs differ, with a poorer prognosis and a higher rate of secondary malignancies noted in the former (10).

Conclusion. Herein we explored the differences between the tumors of two infants, both with SHH-MB, that bear different germline mutations causing GS, and we used unbiased whole-transcriptome sequencing to identify previously undetected potential therapeutic targets. Often, the study of a rare genetic disease can have implications for the research and treatment of a wider cohort of patients, such as SHH-MB as a whole. In the same way that targeted therapy has been developed for PTCH1-mutated tumors (although, at the moment, relevant only for skeletally mature patients), we hope that a suitable target will be found for those with downstream mutations, such as SUFU and GLI, and for infants with MB in general. This report may stimulate interest in the MB community and hopefully result in international collaborations to further delineate the unique features of different groups within MB, and the SHH group in particular.

Conflicts of Interest. The Authors declare no conflicts of interest.
A Cross-Cultural Study of Distress during COVID-19 Pandemic: Some Protective and Risk Factors

Previous studies on the impact of the COVID-19 pandemic on mental health in different countries found an increase in anxiety and stress and an exacerbation of previous mental health problems. This research investigated some of the protective and risk factors for distress during the COVID-19 pandemic, among which were the perception of receiving social support from family members and friends, and a chronic tendency to worry. The study was conducted in three European countries: Italy, Serbia, and Romania. A total of 1100 participants (Italy n = 491; Serbia n = 297; Romania n = 312) responded to a questionnaire. The results show that distress during the COVID-19 pandemic is higher for people who are chronic worriers and for those who have higher levels of fear of COVID-19. More specifically, it is confirmed that a chronic tendency to worry exacerbates the relationship between fear and distress: the relationship is stronger for people who have a greater tendency to worry.

Introduction. Following the epidemic in Wuhan, coronavirus (COVID-19) soon spread around the world; on 30 January 2020, the World Health Organization declared a Public Health Emergency of International Concern, and on 11 March, it officially declared a pandemic. As of March 2021, the COVID-19 pandemic has affected more than 110 million people worldwide, caused more than 2.5 million deaths, and there are approximately 25 million active cases [1]. The disease is mild or asymptomatic in most people; in some (usually the elderly and those with comorbidities), it may progress to pneumonia, acute respiratory distress syndrome (ARDS), and multi-organ dysfunction. The fatality rate is estimated to range from 2% to 3%. The COVID-19 pandemic and lockdowns have caused widespread concern and fear [2-6]. People are afraid of becoming sick and dying, and they are concerned about family members. Several studies in different countries have found a significant relationship between fear of COVID-19 and stress, anxiety, and even depression [2,4,6-11]. However, the literature suggests that the potential negative psychological effects of COVID-19 may vary within the population according to individual and contextual factors [12-14]. Some factors may exacerbate the negative effects, whereas others may have a protective role. Fear of COVID-19 is heightened by information about infection rates, overcrowded hospitals, deaths, and other negative information about the pandemic, which is perceived as a risk [15]. Pandemic statistics are available from many types of mass media, but this information changes every day, and it is not always easy to follow and remember the numbers. In addition, there is evidence that people may suffer from coronavirus news fatigue or apathy and consequently pay less attention to information about the pandemic [16]. This may explain the fact that, after more than one year of the pandemic, many people still downplay the risks of COVID-19, scoffing at mask-wearing and social distancing. Regardless of these difficulties, people still try to get an idea of the current situation, and they often estimate the number of positive cases and how widespread the virus is. These estimates are based on information they have picked up somewhere and on heuristics.
The current study has several aims: (1) to explore whether knowledge of the statistical data and the perception of how widespread COVID-19 is are significantly related to fear and psychological stress; and (2) to explore whether the relationship between the perception of how widespread COVID-19 is in a residential area and fear of COVID-19, on one side, and distress, on the other side, is moderated by some individual factors. This research takes into consideration several risk and protective factors, in particular: (1) a chronic tendency to worry; (2) the perception of the possibility of receiving social support from family and friends; (3) the perception of household climate; (4) an individual's financial situation; and (5) some socio-demographic characteristics (such as age, gender, level of education, and marital status). We expect that participants who estimate that there are many positive cases in their residential area will have higher levels of fear and distress. In addition, we expect that the relationship between the perception of how widespread COVID-19 is in the residential area and fear of COVID-19, on one side, and distress, on the other side, may be exacerbated by a chronic tendency to worry [12,17] and by negative financial and household situations. Moreover, this relationship may be mitigated by the perception of the possibility of receiving social support [18,19].

Risk and Protective Factors during COVID-19. The way people respond to stress during the COVID-19 pandemic may depend on several factors, such as socio-demographic characteristics, personality traits, and contextual factors. Several studies have confirmed the crucial role that social support plays in buffering the negative impact of COVID-19 on mental health [18,19]. Given that, during the pandemic, people are obliged to spend most of their time at home, the context in which they live is of crucial importance for their well-being. This study explores the role of perceived social support from family and friends in predicting psychological well-being vs. stress. Normally, when people feel distressed, sad, or anxious, they turn to others for social support. Social support is provided by networks that may consist of, for example, family, relatives, friends, neighbors, or coworkers [20]. Several studies have reported that providing and receiving social support is a crucial resource associated with greater resilience to stress [18-23], whereas a lack of social support can contribute to distress [24]. We hypothesize that perceived social support from family members and friends may be an important resource for coping with difficulties during this pandemic. In addition, we expect that a tendency to worry could have a significant impact on distress. Most people worry about the COVID-19 situation and feel that they have no control over it. Worry has been defined as a chain of thoughts and images, and an anxious apprehension, which are negatively affect-laden and relatively uncontrollable [10,17,25]. Almost everyone worries occasionally, but many people worry every day [26]. The feeling of not being able to control one's worrying is probably the key to distinguishing between "pathological" and "normal" worrying. Pathological or chronic worry is commonly assessed using the Penn State Worry Questionnaire (PSWQ; [27]), a 16-item inventory that assesses the generality, excessiveness, and uncontrollability aspects of worry [28].
We expect that chronic worry will exacerbate the effects of the perception of how widespread COVID-19 is and of fear of COVID-19, making people more susceptible to stress. Living in a high-risk area of infection, or the subjective perception of risk, is related to fear, especially in those who worry about other problems and, as such, are more vulnerable. Furthermore, we expect some socio-demographic factors to have a significant impact, such as gender, age, education level, and marital status. It has been widely demonstrated that women have higher levels of fear of COVID-19, anxiety, stress, and depression than men [3,5,8]. Although the pandemic may affect all age groups, the most vulnerable are children and adolescents, especially those living in an unhealthy family environment [18,29,30], and older people, especially those with health problems [31-33]. Furthermore, we expect that the relationship between the perception of how widespread COVID-19 is in the residential area, fear of COVID-19, and stress may be moderated by the perception of an individual's financial situation and by a problematic household climate. Previous studies have found that people with financial difficulties during the COVID-19 pandemic have poorer mental health [30,34]. Many people lost jobs and were weakened financially, which created anxiety and stress for them and their families. Moreover, since the pandemic started, a high percentage of people have been asked to work or attend classes from home and to limit their social relations with other people as far as possible. In relation to this, several studies have reported negative changes in the mental health of parents and children and disruption of the quality of their interactions. For example, an increased frequency of domestic violence and of shouting at and physically punishing children has been registered during the pandemic [35-37]. Stress may also be caused by distance from family members, especially when they are severely ill or dying of the virus [38]. Lastly, we also considered trust in governmental institutions as a possible moderator between the perception of how widespread COVID-19 is in a residential area, fear of COVID-19, and stress. Trust in institutions may refer to different aspects [39]. However, some scholars suggest that, despite this complexity, trust judgments may be considered one-dimensional, as different types of judgment combine into one generalized assessment [39-41]. Research has confirmed that trust in a government's good intentions and capacity to act well fosters willing compliance with regulations to limit the negative effects of the pandemic [42-45]. We expect that individuals who trust the institutions will be less distressed during the COVID-19 pandemic than those who do not. The current study explores the effects of the above-mentioned factors from a comparative perspective in three European countries: Italy, Romania, and Serbia.

COVID-19 in Italy, Serbia, and Romania. Italy was the first European country to be severely affected by COVID-19. The virus was first confirmed on 31 January 2020, and it spread quickly throughout the country. On 9 March 2020, the Italian government declared a state of emergency and introduced a lockdown that lasted until 11 May 2020 [46]. During that period, Italy registered over 28,884 deaths due to COVID-19, and its number of positive cases was one of the highest in the world [46]. By March 2021, Italy had about three million confirmed cases and almost 100,000 deaths [46].
In the neighboring Balkans, COVID-19 arrived a couple of weeks later; in Romania, it was confirmed on 26 February 2020. On 21 February, the Romanian government announced a 14-day quarantine for citizens returning from the affected countries (Italy and China at that time). A state of emergency was declared in Romania on 16 March for a period of 30 days [47]. Up until March 2021, the country recorded almost 800,000 confirmed cases and 20,000 deaths [47]. In Serbia, the first positive case was reported on 6 March 2020. The government declared a national state of emergency on 15 March and adopted containment measures. These included closing borders, prohibiting the movement of citizens during weekends and between 17:00 and 05:00 on weekdays (with a total ban for senior citizens), suspension of public transport and of all activities in parks and public areas, and the closure of commercial activities (except grocery stores and pharmacies). On 6 May, the state of emergency and lockdown were lifted. In response to an increasing number of cases, a state of emergency was declared again on 3 July in several municipalities, including the capital, Belgrade. New containment measures were implemented, including restrictions on outdoor and indoor gatherings and the mandatory use of masks in public indoor spaces, which were mostly, but not fully, respected [48]. As of March 2021, almost 450,000 confirmed cases and more than 4000 deaths had been recorded in a population of approximately seven million people [49-51].

Participants. This study involved participants from three European countries: Italy, Serbia, and Romania. The study conducted in Italy included 491 participants of Italian nationality (n = 355 female, 72.7%). According to the power analysis we performed (GPower 3) [52], with 0.05 as the threshold probability for rejecting the null hypothesis and an expected correlation of r = 0.15, this sample size exceeded the 166 participants required for 95% power. The age range was 18-68 years (M = 29.44, SD = 14.07). The majority of the participants (70.9%) had completed high school, 9.8% had an undergraduate degree, 9.2% had a graduate degree, 3.9% had a post-graduate degree, and 6.3% had completed primary school. The majority (71.1%) of the sample were single, 24.8% were either married or in a relationship, and the remainder were widowed or divorced. Most of the participants (56.4%) were students. The study conducted in Serbia, in the period 20-28 May 2020, involved 297 participants (n = 226 female), with an age range of 18-66 years (M = 29.29; SD = 14.27). The majority of the participants (44.8%) had completed high school, 43.1% had a graduate degree, 10.4% had a post-graduate degree, and 1.7% had completed primary school. The majority (68.7%) of the sample were single, 23.2% were either married or in a relationship, and the remainder were widowed or divorced. Most of the participants (64.0%) were students, and 24.6% were employed. The study conducted in Romania involved 312 participants (n = 255 female), with an age range of 18-69 years (M = 31.74; SD = 10.71). About 20.2% of the participants had completed high school, 44.2% had a graduate degree, and 35.6% had a post-graduate degree. The majority (58%) of the sample were single, 32.7% were either married or in a relationship, and the remainder were widowed or divorced. Most of the participants (62.8%) were employed, and about 25% were students.

Procedure. Data were collected between 20 May and 20 June 2020.
This was immediately after the end of the lockdown in Italy (18 May 2020). Recruitment was via social media (Facebook) and through students who invited their friends and relatives to participate in the study. The survey was presented as research designed to investigate the psychological impacts of the COVID-19 pandemic. It took approximately 15-20 min to complete and was uploaded on Google Forms (https://forms.gle/oZJzQtMPCaf6gd837 (accessed on 20 June 2020) in Italy, https://forms.gle/PXvWu61DrbyfF17fA (accessed on 20 June 2020) in Serbia, and https://forms.gle/K9S5Ak9xS66995hKA (accessed on 20 June 2020) in Romania). The response rate was 98% in Italy and 100% in Serbia and Romania. The study was approved by the Ethics Committee of the Department of Social and Developmental Psychology, Sapienza-University of Rome (Prot. 468-4 May 2020).

Measures. The questionnaire comprised the following groups of measures. Demographics: the participants indicated their age, gender, level of education, marital status (single vs. married/in a relationship), and country and city of residence. Estimation of the level of spread of COVID-19 in the district: the participants were asked to estimate how many people had coronavirus in their district on a five-point scale (1 = no one; 5 = a large number of people). Participants also indicated whether they had had the coronavirus infection (no-not sure/yes), whether any family member had had coronavirus (no-not sure/yes), and whether any friends/acquaintances had had coronavirus (no-not sure/yes). At the time of data collection, a relatively small number of participants responded positively to these items, therefore we did not include these variables in the statistical analyses. Moreover, we asked the participants if they knew (approximately) how many people had contracted coronavirus in their country, how many people were infected on that day, how many people had died since the beginning of the COVID-19 emergency, and how many people had been infected with coronavirus in their place of residence. We found that a high percentage of the participants did not respond to these questions, or submitted distorted data, so we did not include any of these items in the statistical analyses. Economic situation: next, we asked the participants to compare their economic situation with the situation before the COVID-19 lockdown on a five-point scale (1 = much worse; 2 = slightly worse; 3 = more or less the same; 4 = improved slightly; and 5 = much better) and to indicate whether they were concerned about their economic situation (1 = not at all concerned; 2 = slightly concerned; 3 = somewhat concerned; 4 = moderately concerned; and 5 = extremely concerned). An index of economic difficulties was calculated by summing the responses on the last two items after having reversed the responses on the first item. A higher index indicates greater concern about economic difficulties. Household climate: the participants rated on a five-point scale (1 = never to 5 = very often) whether they experienced the following problems in their household during the COVID-19 lockdown: (a) little interaction; (b) sharp discussions and fighting; (c) a lack of respect. An index was created, with higher scores representing a more negative household climate. In addition, we asked the participants if they had been far away from family (partner/children/parents) during the lockdown.
Penn State Worry Questionnaire (PSWQ) [28]: the PSWQ is a well-known measure that is free of worry content; that is, it asks about the tendency to worry without identifying the possible targets or contents of those worries. In this research, we used a shortened version consisting of nine items rated on a five-point Likert scale (1 = not at all typical of me; 5 = very typical of me). Eight items were worded to indicate pathological worry, with higher numbers indicating more worry (e.g., "Once I start worrying, I cannot stop"), while the remaining item was worded to indicate that worry is not a problem, with higher numbers indicating less worry (e.g., "I never worry about anything"). That item was reversed, and a total score was calculated by summing the averaged responses on all items. Higher PSWQ scores reflect greater levels of the tendency to worry. The PSWQ demonstrated high reliability in all three countries (Cronbach's α was 0.90 in Italy, 0.91 in Serbia, and 0.93 in Romania). The social support scale (four items): we asked the participants to rate how confident they were that they would receive emotional support from family members (parents, partner, and children) and from friends and relatives on a five-point Likert scale (1 = not at all sure; 5 = completely sure). We created two indices of social support: (1) social support from family members; and (2) social support from relatives and friends. COVID-19 fear scale: we designed a scale composed of five items (I am afraid that I might get coronavirus; I am afraid that I may end up in intensive care because of COVID-19; I am afraid that I might die if I get the coronavirus infection; I am afraid that a loved one might get the coronavirus infection; and I am afraid that someone in my family might end up in hospital because of COVID-19). The participants were asked to rate their level of concern about coronavirus on a five-point scale (1 = not at all; 5 = very much). We ran a principal component analysis to evaluate the factor structure of the scale; Kaiser's criterion (eigenvalue > 1) and a scree plot were used to select the number of factors. The analysis revealed a mono-factorial structure that explained 69.33% of the variance when considering the data of the three samples together (Table S1 in the Supplements). An index was created, with higher scores reflecting higher levels of fear of COVID-19 (Cronbach's α was 0.90 in Italy, and 0.89 in Serbia and in Romania). Recently, several scales measuring the fear of coronavirus have been proposed in the literature [7,53,54], but these were not yet available at the time of this study. The scale of trust in governmental institutions: we asked the participants to evaluate their level of agreement with three items on a five-point scale (1 = completely disagree; 5 = completely agree). The items were: I believe the government is taking good measures for the prevention and containment of the virus; I trust the government's advice on how to prevent the spread of coronavirus; and I think our health system has provided adequate care during the COVID-19 emergency. An index of trust was calculated by summing the averaged responses to these three items. Higher scores reflect higher levels of trust in institutions. The measure had acceptable reliability in all samples (Cronbach's α was 0.73 in Italy, 0.80 in Serbia, and 0.77 in Romania).
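The scoring steps described above (reverse-scoring the positively worded PSWQ item on a 1-5 scale and checking internal consistency with Cronbach's α) can be sketched as follows; the response matrix is randomly generated and therefore illustrative only, not the study's data.

```python
# Sketch of reverse-scoring and Cronbach's alpha for a 9-item, 1-5 scale.
# Responses here are random, so the resulting alpha will be near zero.
import numpy as np

def reverse_score(item, low=1, high=5):
    """Flip a Likert item so that high raw scores become low scored values."""
    return (high + low) - item

def cronbach_alpha(items):
    """items: respondents x items matrix of scored responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
data = rng.integers(1, 6, size=(100, 9)).astype(float)  # 100 respondents, 9 items
data[:, 8] = reverse_score(data[:, 8])                  # item 9 is positively worded
print(f"alpha = {cronbach_alpha(data):.2f}")
```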
The scale of distress: this contained six negative emotional states (sad, frightened, concerned, anxious, distressed, and tense). The participants were asked to rate how they had been feeling lately on a five-point scale (1 = never; 5 = always/usually). The exploratory factor analysis produced a single dimension that explained 61.29% of the variance. An index of distress was calculated, with higher scores indicating higher levels of distress (Cronbach's α was 0.87 in Italy and in Serbia, and 0.88 in Romania).

Results. As a first step, an overall confirmatory factor analysis (CFA) was conducted before proceeding to test multigroup measurement invariance. This procedure allows researchers to examine whether respondents from different groups interpret the same measure in a conceptually similar way [55-57]. The estimated model consisted of six correlated latent factors (tendency to worry, fear of COVID-19, stress, family climate, family support, and friends support). We adopted the partially disaggregated parcels method, which randomly aggregates items that load on the same factor so that there are two or three combined indicators instead of several single-item indicators. Parcels have been shown to have several advantages: they have higher reliability than single items [58], allow a better fit through the reduction of the number of variables involved in the model [59], and may also ensure a more normal multivariate distribution [60]. More specifically, for the tendency to worry, fear of COVID-19, and stress scales, we used two and three indicators, respectively. The model fit indices were satisfactory when running the CFA model across all three countries (χ²(67) = 624.416, p < 0.001, CFI = 0.915, RMSEA = 0.087). Moreover, to assess measurement invariance, we ran a multigroup confirmatory factor analysis (MG-CFA), starting from a less restrictive model (i.e., configural) and moving towards more restrictive ones (i.e., metric and scalar) [61,62]. To be specific, configural invariance examines whether items load onto the same latent factor across groups. This model is critically important because one can proceed to testing all subsequent invariance models in the hierarchical sequence only if configural invariance is achieved. Once configural invariance holds, metric invariance should be tested to warrant that the different groups respond to the items in the same way. Metric invariance means that the factor loading of each item on the latent factor is the same across groups. Satisfying metric invariance demonstrates that the unit and the interval of the latent factor are equal across groups [63]; thus, it allows the comparison of factor variances and structural relations (e.g., correlations between variables) across groups [64]. Furthermore, when metric invariance is met, scalar invariance is required to assess whether the intercept of each item is the same across groups, in addition to the equality of factor loadings [65].

Descriptive statistics for all variables are shown in Table 2. From the means in Table 2, we can see that the level of fear of COVID-19 was significantly, but not drastically, different in the three countries, being lowest in Serbia and highest in Romania. This could be associated with the fact that, in that period, the spread of COVID-19 was lower in Serbia than in Italy and Romania; in effect, the respondents in Serbia perceived the lowest level of spread of COVID-19 in their district. Instead, the levels of stress and tendency to worry were lower in Romania than in Serbia and Italy.
At the same time, the perception of social support from family was higher in Romania than in the other two countries, whereas the perception of social support from friends was higher in Italy than in Romania and Serbia. The analysis of correlations between the examined variables (Table 3) indicates that, in all three samples, distress correlated most strongly with the tendency to worry, and then with fear of COVID-19. In addition, we found that distress, tendency to worry, and fear of COVID-19 were correlated significantly with gender in Italy and Romania, but not in Serbia: in Italy and Romania, male participants had lower levels of distress, fear, and worry than female participants. Furthermore, distress was slightly and negatively correlated with age in all three samples, and fear with level of education only in the Serbian sample. Thus, the level of distress was higher in younger people and, in Serbia, also in those with a lower education.

Predicting Distress during the COVID-19 Pandemic. We conducted a multiple regression analysis using SPSS to examine the percentage of variance in distress accounted for by each of our predictor variables. We considered as predictors some socio-demographic variables (gender, age, level of education, and civic status), the index of financial difficulties, the perception of the spread of COVID-19 in the place of residence, fear of COVID-19, the perception of social support from family and friends, the perception of household climate, having been distant from family, trust in government institutions, and the tendency to worry (see Table 4). All the variables were standardized before entering the analysis. Furthermore, we considered two-way interactions between the perception of the spread of COVID-19 in the place of residence and each of these variables: fear of COVID-19, tendency to worry, trust in government, and social support from family and friends. Finally, we also included two-way interactions between fear of COVID-19 and the variables tendency to worry, trust in government, and social support from family and friends. Results in the Italian sample showed that the regression model accounted for a high percentage of variance (51%) (F(22,468) = 22.19, p < 0.001). Among the socio-demographic variables, we found a significant effect only of gender (β = 0.09, t = 2.55, p < 0.01), indicating that females had higher levels of distress. Subsequently, we found a significant effect of financial difficulties (β = 0.10, t = 2.97, p < 0.002), meaning that participants with economic problems were more distressed. The analysis confirmed that fear of COVID-19 is a strong predictor of distress (β = 0.20, t = 5.18, p < 0.001). The tendency to worry is another very strong predictor of distress (β = 0.52, t = 13.55, p < 0.001). We found an interaction effect between fear of COVID-19 and social support from family (β = −0.10, t = −2.72, p < 0.01), and an interaction effect between fear of COVID-19 and tendency to worry (β = 0.08, t = 2.26, p < 0.02). To check for multicollinearity, we calculated the variance inflation factor (VIF) and the tolerance statistic. The largest VIF (2.20) was for age, well below 10, so it was within tolerance. The corresponding tolerance statistic for age (0.45) was not below 0.1, and again this was within tolerance. Thus, we concluded that multicollinearity was not present.
From the simple slope analyses [68], it emerged that the relationship between fear of COVID-19 and distress was stronger for the participants who had a greater tendency to worry (β = 0.27, t = 5.69, p < 0.001) than for those who had a lower tendency to worry (β = 0.15, t = 2.98, p < 0.003). In addition, this relationship was stronger when people had less support from family (β = 0.56, t = 10.29, p < 0.001) than when they had more support (β = 0.36, t = 6.27, p < 0.001). In the Serbian sample, we considered the same variables as in the previous analysis. The regression model accounted for 52.5% of the variance in distress (F(22,274) = 13.75, p < 0.001), with significant positive effects of financial difficulties (β = 0.15, t = 3.27, p < 0.001), family climate (β = 0.11, t = 2.25, p < 0.03), fear of COVID-19 (β = 0.16, t = 3.42, p < 0.001), and tendency to worry (β = 0.48, t = 10.08, p < 0.001), and a negative effect of social support from family (β = −0.10, t = −1.90, p < 0.05). More interestingly, we found an interaction effect between fear of COVID-19 and tendency to worry (β = 0.12, t = 2.83, p < 0.005). Here also, we checked the VIF value and the tolerance statistic for multicollinearity. The largest VIF (2.42), for age, was well below 10 and thus within tolerance. The corresponding tolerance statistic for age (0.41) was not below 0.1, and again this was within tolerance. Thus, we concluded that multicollinearity did not exist in this analysis either. In order to better understand the interaction, we conducted a simple slope analysis. We found that the relationship between fear of COVID-19 and distress was stronger and significant only when people had a high tendency to worry (β = 0.35, t = 5.46, p < 0.005), whereas it was not significant when people had a low tendency to worry (β = 0.01, t = 0.17, n.s.). Finally, the results for the Romanian sample showed that the regression model accounted for 67% of the variance in distress (F(22,274) = 25.48, p < 0.001). The analysis confirmed once again the significant effect of fear of COVID-19 on distress (β = 0.35, t = 7.88, p < 0.001) and of the tendency to worry (β = 0.59, t = 14.11, p < 0.001). We also found a negative effect of social support from family (β = −0.12, t = −2.81, p < 0.005). Lastly, we confirmed a significant interaction effect between fear of COVID-19 and tendency to worry (β = 0.08, t = 2.19, p < 0.03). From the test of multicollinearity, the largest VIF (1.78) was for social support from family, and it was within tolerance. The corresponding tolerance statistic for this variable (0.56) was not below 0.1, and again this was within tolerance. From the simple slope analyses, it emerged that the relationship between fear of COVID-19 and distress was stronger for the participants who had a higher tendency to worry (β = 0.37, t = 6.30, p < 0.001) than for those who had a lower tendency to worry (β = 0.24, t = 5.13, p < 0.001).

Discussion

This study aimed to explore the role of some protective and risk factors in predicting distress during the COVID-19 pandemic. We considered the perception of how widespread COVID-19 was in the place of residence, fear of COVID-19, the chronic tendency to worry, the perception of the opportunity to receive social support from family members and friends, the perception of family climate, the perception of economic problems, and some socio-demographic characteristics (age, gender, level of education, and marital status).
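With standardized variables, the simple slope analyses reported above reduce to evaluating the conditional slope of fear at ±1 SD of the moderator. A minimal, self-contained sketch; the variable names match the hypothetical model from the previous snippet:

```python
import numpy as np

def simple_slopes(params, cov, focal="fear_covid", inter="fear_x_worry"):
    """Conditional slope of the focal predictor at -1/+1 SD of the moderator
    (with z-standardized variables, +/-1 SD equals moderator values of -1/+1)."""
    for w, label in [(-1.0, "low worry (-1 SD)"), (+1.0, "high worry (+1 SD)")]:
        slope = params[focal] + w * params[inter]
        # standard error from the coefficient covariance matrix
        var = (cov.loc[focal, focal] + w**2 * cov.loc[inter, inter]
               + 2 * w * cov.loc[focal, inter])
        print(f"{label}: slope = {slope:.2f}, t = {slope / np.sqrt(var):.2f}")

# usage with the fitted statsmodels result from the previous sketch:
# simple_slopes(model.params, model.cov_params())
```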
This research was conducted in Italy, Serbia, and Romania in the period immediately after the lockdown. We explored participants' knowledge of the statistics about COVID-19 cases and whether that knowledge was associated with fear and distress. We asked the participants to indicate approximately how many people had coronavirus in their country, how many people were positive at that time, how many people had died since the beginning of the COVID-19 emergency, and how many people had been infected with coronavirus in their place of residence. We noticed that many participants did not respond to these questions, or that some of them submitted distorted data. Very few participants submitted correct answers or figures close to the correct statistics. This could depend, as mentioned in the introduction, on news fatigue or apathy [19], or simply on the fact that people do not remember figures that change continuously. However, people had some idea about how widespread the virus was, especially in their area of residence, which could more or less correspond to the real situation. From Table 2, we can see that the respondents in Serbia perceived the lowest level of spread of COVID-19 in their district and, consequently, also had a lower level of fear of COVID-19 than the respondents in Italy and Romania. The official mass media in Serbia had reported a relatively small number of positive cases at that time, which probably contributed to a lower degree of fear of the pandemic than in the other two countries. In addition, our results confirmed a significant relationship between these estimations and fear of COVID-19 in all three countries. As expected, people in Italy had the highest level of distress at the time of the study, given the dramatic consequences of the pandemic during the lockdown, when Italy had the highest percentage of positive cases and the highest mortality rate in Europe and worldwide. Likewise, the level of distress was lower in Romania than in Serbia and Italy, as was the tendency to worry, which may have helped keep distress under control. When we look at the correlations (Table 3), we can see that fear of COVID-19 was strongly associated with distress in all the countries considered. The same was also true for the tendency to worry, which is also strongly correlated with fear of COVID-19. In all three countries we found significant correlations between economic difficulties and distress (and the tendency to worry), and between a negative household climate and distress (and again the tendency to worry). Consistent with our hypothesis, distress in the countries considered was significantly predicted by the level of fear of COVID-19 and, above all, by the tendency to worry. This is congruent with suggestions in previous studies that a chronic tendency to worry is a risk factor which strongly contributes to non-adaptive psychological responses to traumatic events and stressors [25,69], as also confirmed in the simple slope analyses. In Serbia, the relationship between fear of COVID-19 and distress was significant only for people with a high tendency to worry, whereas in the other two countries it was also significant when people had a low tendency to worry. This means that, there, the relationship between fear and distress was strong and existed independently of the tendency to worry.
Social support from family played a positive role in stress reduction in Serbia and in Romania, although not a very strong one, which is similar to the results obtained in several other studies [70]. That effect was not significant in Italy, probably as a consequence of the perceived vulnerability within families caused by the first and most dramatic lockdown in Italy. However, we found that social support from family moderated the relationship between fear of COVID-19 and distress: that relationship was stronger when people did not perceive support from family. Furthermore, we found a significant effect of household climate on distress in two samples (Italy and Serbia), but not in Romania. Participants who indicated that household interactions were characterized by conflict and manifestations of contempt felt more distressed. Our results also showed that the level of distress can be explained only to a small extent by the socio-demographic variables considered (age, gender, level of education, and marital status).

Conclusions

Our primary aim in this research was to explore some of the psychological and socio-economic predictors of distress during the COVID-19 pandemic. In addition, we found that fear of COVID-19 had a strong effect on distress. What is clear from our three studies is that the tendency to worry, as a dispositional psychological characteristic, is a strong and positive predictor of negative stress (distress), over and above the other selected predictors. Distress during the COVID-19 pandemic is higher for people who are chronic worriers, and lower for people who have low levels of worry. It can be concluded that general dispositional tendencies, in this case the tendency to worry, are clearly manifested in crises such as the current one connected with COVID-19, and that such personality tendencies contribute to greater distress in individuals. Particularly surprising is the relatively low degree of importance of social support from friends in overcoming distress.

Limitations of This Study

Although these findings are important for understanding the interplay between different personal and social factors that could have some protective or risk role in experiencing distress, there are some limitations that should be noted. These correlational data do not allow conclusions about cause and effect. Our data could also be analyzed through a hypothetical model in which fear of COVID-19 is a mediator between the estimation of the spread of COVID-19 and distress. The survey involved primarily young people and those who are familiar with the use of online platforms and social networks. Additional studies should ensure more representative coverage of the population. An additional limitation of the study stems from the non-longitudinal design. The research was undertaken during the first wave of the COVID-19 pandemic, and the highly uncertain situation of a prolonged pandemic crisis poses additional challenges regarding its consequences. Therefore, follow-up studies in different phases of the ongoing pandemic are needed. Finally, the impact of other variables was not considered due to the time limits of an online survey. The selected variables explained about 50-67% of the variance in distress, which indicates that future studies should consider other risk or protective factors in order to create recommendations for improving preventive programs and policies.
Institutional Review Board Statement: This research was approved by the Ethics Committee at the first author's department. All procedures performed in this study involving human participants were in accordance with the ethical standards of that committee and/or the national research committee. Informed Consent Statement: Informed consent was obtained from all individual participants included in this study. Data Availability Statement: The datasets pertaining to this study are available from the corresponding author upon request. Conflicts of Interest: The authors declare no conflict of interest.
Star-Forming Galaxies at Cosmic Noon

Ever deeper and wider lookback surveys have led to a fairly robust outline of the cosmic star formation history, which culminated around z ∼ 2, a period often nicknamed "cosmic noon." Our knowledge about star-forming galaxies at these epochs has dramatically advanced from increasingly complete population censuses and detailed views of individual galaxies. We highlight some of the key observational insights that influenced our current understanding of galaxy evolution in the equilibrium growth picture:
• scaling relations between galaxy properties are fairly well established among massive galaxies at least out to z ∼ 2, pointing to regulating mechanisms already acting on galaxy growth;
• resolved views reveal that gravitational instabilities and efficient secular processes within the gas- and baryon-rich galaxies at z ∼ 2 play an important role in the early build-up of galactic structure;
• ever more sensitive observations of kinematics at z ∼ 2 are probing the baryon and dark matter budget on galactic scales and the links between star-forming galaxies and their likely descendants;
• towards higher masses, massive bulges, dense cores, and powerful AGN and AGN-driven outflows are more prevalent and likely play a role in quenching star formation.
We outline emerging questions and exciting prospects for the next decade with upcoming instrumentation, including the James Webb Space Telescope and the next generation of Extremely Large Telescopes.

Background

Star-forming galaxies at redshift z ∼ 2, 10 billion years ago, trace the prime formation epoch of today's massive disk and elliptical galaxies. Our knowledge about their properties, and their place in the global context of galaxy evolution, has undergone spectacular advances in the past two decades from both increasingly complete population censuses at ever earlier cosmic times and increasingly detailed descriptions of individual systems. The identification and characterization of galaxies according to their global colors, stellar populations, structure and morphologies, and environment is now routinely done out to z ∼ 3, encompassing 85% of the Universe's history. Comprehensive surveys of the kinematics and interstellar medium (ISM) properties have been obtained from spatially- and spectrally-resolved observations of ionized gas line emission out to z ∼ 3−4. The cold gas content has been measured, and is being resolved on subgalactic scales for rapidly rising numbers of galaxies. Growing samples at z ∼ 4−8 are being assembled, and the first candidates have been identified at z ∼ 9−11, within 500 Myr of the Big Bang, yielding insights into the progenitor populations. In parallel, observations are increasingly charting the distributions and kinematics of stars, gas, and metals in and around galaxies, unraveling vital phases of the baryon cycle and the interplay between baryons and dark matter. With transformative boosts in sensitivity and angular resolution afforded by JWST and the extremely large telescopes, galaxy evolution at z > 1 will be charted with unprecedented completeness well into the epoch of reionization, and with unrivaled sharpness down to the 100-pc scale of individual giant star-forming complexes, a landscape revolution akin to the advent of the Hubble Space Telescope (HST) and the first 8 m-class telescopes in the 1990s. Of the remarkably rich observational harvest of the past 5-10 years, we can here only highlight select aspects that have been among the most influential in advancing our knowledge about z ∼ 2 SFGs.
We focus on the internal properties of galaxies as revealed by diagnostics in emission, and on the typical environments found in deep extragalactic fields, which comprise the bulk of the galaxy population. Section 2 presents the observational landscape. Section 3 discusses global properties, providing the population context and enabling evolutionary links, and Section 4 zooms in on resolved properties, providing insights into the physics shaping galaxies. Section 5 discusses subpopulations of SFGs with extreme properties. Section 6 briefly comments on the theoretical landscape. In closing, Section 7 summarizes the article and outlines open issues and future observational opportunities. For simplicity, we refer throughout to the 1 ≲ z ≲ 3 epochs as "z ∼ 2" or "high z" unless explicitly stated otherwise. We adopt a Λ-dominated cosmology with H0 = 70 km s⁻¹ Mpc⁻¹, Ωm = 0.3, and ΩΛ = 0.7. For this cosmology, 1″ corresponds to 8.4 kpc at z = 2. Magnitudes are given in the AB photometric system. Where relevant, galaxy masses and star formation properties are adjusted to a common stellar initial mass function (IMF). IMF: Initial mass function of stars.

2. OBSERVATIONAL LANDSCAPE

The dramatic advances in our knowledge about galaxies at cosmic noon have been driven by the confluence of novel observational techniques and sensitive, high-multiplex ground- and space-based instrumentation across the electromagnetic spectrum. Multi-wavelength campaigns concentrated in select fields targeted as part of the Great Observatories Origins Deep Survey (GOODS), the Cosmic Evolution Survey (COSMOS), the All-wavelength Extended Groth strip International Survey (AEGIS), and the UKIDSS Ultra-Deep Survey (UDS) have yielded rich data sets and have seen their legacy value fully realized by providing samples of choice for many detailed follow-up studies. Several reviews have covered various aspects of z > 1 galaxy surveys in the past decade (notably Shapley 2011, Glazebrook 2013, Madau & Dickinson 2014, Lutz 2014, Conselice 2014, Tacconi et al. 2020). This Section gives an update incorporating recent programs, with the goal of highlighting the observational underpinnings of our current physical understanding of cosmic noon galaxies.

Our empirical knowledge rests on a ladder going from the identification of galaxies in large photometric samples and their spectroscopic confirmation, enabling statistical descriptions of the population, to increasingly detailed studies of subsets from spectrally/spatially resolved data. Observations at optical to near-IR wavelengths form a major part of each step, probing the redshifted, rest-frame UV to optical emission from z ∼ 2 galaxies. Figure 1 identifies salient spectral features on a model spectrum created for an example SFG at z = 2.3 and shows how they shift across the various atmospheric bandpasses from z = 1 to 3.

Figure 1. Left: Synthetic spectrum of an illustrative SFG at z = 2.3, plotted in observed wavelength vs. logarithmic flux density units. The spectrum was produced using the code BAGPIPES for the properties of a Milky Way-mass progenitor galaxy, based on the scaling relations and the evolution thereof discussed in Section 3. Rest-frame far-UV absorption lines were incorporated with relative strengths based on Steidel et al. (2016).
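The redshift tracking shown in the right panel of Figure 1 (described just below) amounts to simple arithmetic, λobs = λrest (1 + z). A minimal sketch; the band edges are approximate illustrative values, not the ESO SkyCalc computation used for the figure:

```python
# observed wavelength of key rest-frame features as a function of redshift
REST_A = {"Lya": 1216.0, "[OII]": 3727.0, "Hbeta": 4861.0,
          "[OIII]": 5007.0, "Halpha": 6563.0}
BANDS_UM = {"J": (1.17, 1.33), "H": (1.49, 1.78), "K": (2.03, 2.37)}  # approximate

def observed_bands(z: float) -> dict:
    """Map each feature to its observed wavelength (micron) and host band(s)."""
    out = {}
    for name, lam_rest in REST_A.items():
        lam_obs = lam_rest * (1 + z) / 1e4  # Angstrom -> micron
        hits = [b for b, (lo, hi) in BANDS_UM.items() if lo <= lam_obs <= hi]
        out[name] = (round(lam_obs, 2), hits)
    return out

print(observed_bands(2.3))
# at z = 2.3: Halpha lands in K, Hbeta and [OIII] in H, [OII] in J
```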
Right: Observed wavelengths of salient emission and absorption features (identified on the spectrum and described in the text) as a function of redshift from z ∼ 1 to ∼ 3. In both plots, the dark-to-light grey shading scales with increasing atmospheric transparency computed with the European Southern Observatory (ESO) SkyCalc tool (http://www.eso.org/observing/etc/skycalc/skycalc.htm), and the main photometric bandpasses are indicated at the bottom of the left panel.

These features include (with rest wavelengths given in Å):
• hydrogen recombination and atomic forbidden emission lines from warm ionized gas excited by star formation, AGN, and shock activity, which provide diagnostics of nebular conditions, dust attenuation, galaxy dynamics, and gas outflows (Lyα λ1216, Hβ λ4861, Hα λ6563, [OII] λλ3726,3729, [OIII] λλ4959,5007, [NII] λλ6548,6584, [SII] λλ6716,6731);
• stellar continuum emission, encompassing the Balmer discontinuity at 3646 Å and the 4000 Å break caused by hydrogen and multiple metallic species and molecules in the atmospheres of intermediate- to low-mass evolved stars, on which estimates of the stellar age, stellar mass, and dust reddening are based;
• a rich suite of far-UV (∼1200−2000 Å) interstellar low- and high-ionization atomic absorption lines useful to trace gas outflows/inflows, alongside various other absorption and emission features from stellar photospheres and winds, and gas photoionized by hot stars and AGN (including SiII λ1260, the blend OI+SiII λ1303, CII λ1334, SiIV λλ1393,1402, CIV λλ1548,1550, FeII λ1608, AlII λ1670);
• weaker but important interstellar MgII λλ2796,2803 absorption (another common ISM and outflow diagnostic) and the faint auroral [OIII] λ4363 line (a temperature-sensitive indicator in direct-method gas metallicity estimates).

Figure 2 illustrates the ladder of surveys in terms of spectral resolution vs. the number of galaxies within the 1 < z < 3 interval of interest for this article. The full list of surveys and the main references are compiled in the Supplemental Tables 1 and 2.

2.1. Photometric Surveys in the Optical to Near-/Mid-infrared

Imaging in multiple photometric bandpasses is the most efficient way to identify and characterize large numbers of galaxies over a wide redshift range. Imaging campaigns at optical to mid-IR wavelengths (λobs ∼ 0.3−8 µm) with sensitive cameras at ground-based telescopes, and from space with HST and the Spitzer Space Telescope (hereafter Spitzer), have provided the most extensive censuses of distant galaxies.

Figure 2. Overview of selected optical/near-IR surveys covering 1 < z < 3 as a function of the number of sources in this interval and spectral resolution (see list in Supplemental Tables 1 and 2). The color coding indicates the primary type of observations: photometric imaging (light grey); photometric imaging including high-resolution HST data (blue) and subsets thereof with useful (i.e., S/N > 3) medium-/narrow-band data (dark blue); slitless grism data from HST (cyan); optical spectroscopy (green); near-IR spectroscopy (yellow); near-IR IFU data (red; detailed in Figure 3). Different symbols distinguish surveys of gravitationally lensed targets and/or areas (diamonds) from unlensed ones (circles); for the HFF, the full survey (including parallels) and subsets magnified by µ > 2 and µ > 10 are plotted. The inset shows the combined redshift distributions, grouped and color-coded by observation type, normalized by the total number of 1 < z < 3 galaxies, and with fractions on a logarithmic scale. Given the very heterogeneous nature of the samples (depth, detection/selection function, etc.), the histograms merely serve to illustrate the typical relative distributions. The overall drop with increasing z, smoothest for photometric and grism surveys, largely reflects the flux limits; the turn-up at z ≳ 3 for photometric-only surveys is driven by efficient Lyman-break dropout identification in optical surveys. Key spectral features falling between atmospheric windows cause the z gaps for ground-based spectroscopic and IFU surveys.
At z ∼ 2, the multi-color information is primarily sensitive to the shape of the stellar continuum modulated by interstellar dust. The spectral energy distribution (SED) of galaxies is used to derive photometric redshifts (zphot) and basic properties such as stellar mass and SFR (for techniques, see Salvato et al. 2019 and Conroy 2013, respectively, and the Supplemental Text). SED: Spectral energy distribution. zphot: Photometric redshift, based on the broad/medium/narrow-band SED. R = λ/Δλ: Spectral resolution, given as the ratio of wavelength to the full width at half maximum of a filter bandpass or spectral line spread function. The inclusion of near/mid-IR wavelengths has been crucial to the inventory of the full population by detecting red, optically-faint galaxies, probing wavelengths where outshining by young hot stars and attenuation by dust are reduced, and allowing the light from cooler stars that dominate the stellar mass to be better traced. At z ∼ 2, near-IR data are particularly important to gain leverage from the fairly sharp Balmer/4000 Å continuum breaks. Photometry in broad bandpasses is most sensitive but delivers coarse spectral resolution, with typically R = λ/Δλ ∼ 5−10. The addition of medium-band (R ∼ 10−20) and narrow-band (R up to ∼100) information has proven vital to improve the accuracy and reliability of photometric redshifts and galaxy parameters (e.g., Ilbert et al. 2009). In the GOODS-S and COSMOS fields, with the most extensive photometry in ∼40 optical to mid-IR bands, zphot estimates are as good as ∼0.01−0.05 × (1 + z), with ≲5% of catastrophic outliers (e.g., Skelton et al. 2014). Because of the wide variety of galaxy SEDs, the accuracy depends on galaxy type, redshift range, the specific set of filters, observational depth, the treatment of line emission contributions, and the availability of spectroscopic redshifts to calibrate the zphot. Nonetheless, the wider wavelength coverage and finer SED sampling in many survey fields has brought decisive improvements. The tracking of similar rest-frame wavelengths across a broad range of redshifts allows more consistent comparisons of galaxy properties at different cosmic times. By better encompassing the full diversity of galaxy SEDs, more complete samples can be selected on the basis of photometric redshifts, rather than color criteria involving a few bandpasses devised to isolate specific populations, or on the basis of more fundamental galaxy parameters such as stellar mass, rather than brightness in a given filter with important k-corrections. As a result, more robust distribution functions in terms of intrinsic galaxy properties, and the evolution thereof, have been derived, such as rest-frame luminosity functions and stellar mass functions. Multi-band 0″.1−0″.2 resolution imaging with HST has been increasingly exploited to not only detect distant galaxies and characterize their sizes and morphologies on ∼1 kpc scales, but also to derive maps of stellar properties from resolved color information.
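The zphot accuracy figures quoted above are conventionally computed with the normalized median absolute deviation and an outlier fraction. A minimal sketch; the 0.15 outlier threshold is one common convention, not specific to the surveys cited. For comparison with spectroscopy, a fractional redshift error Δz/(1+z) maps onto a velocity error Δv ≈ c Δz/(1+z), so an accuracy of 0.003 × (1+z) corresponds to ∼900 km s⁻¹:

```python
import numpy as np

def photoz_quality(z_phot, z_spec, outlier_threshold=0.15):
    """sigma_NMAD scatter and catastrophic-outlier fraction, in units of (1 + z)."""
    z_phot, z_spec = np.asarray(z_phot), np.asarray(z_spec)
    dz = (z_phot - z_spec) / (1 + z_spec)
    sigma_nmad = 1.48 * np.median(np.abs(dz - np.median(dz)))
    f_outlier = np.mean(np.abs(dz) > outlier_threshold)
    return sigma_nmad, f_outlier

C_KMS = 2.998e5
print(C_KMS * 0.003)  # velocity equivalent of a 0.003 x (1+z) redshift error
```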
Here, the CANDELS survey played a prominent role, bringing new sensitive near-IR and optical imaging over ∼800 arcmin² distributed in five premier sky regions within the GOODS-S and N, COSMOS, AEGIS, and UDS footprints. Together with imaging from other HST programs, this created a multi-tiered data set: from ultra-deep (5σ depths of ∼29−30 mag) full 9-band imaging over ∼5 arcmin² (Illingworth et al. 2013), through deep (∼125 arcmin²) and wide (∼800 arcmin²) 4−7-band imaging to typical 5σ depths of ≳27 mag, to the widest areas from the I-band 1.7 deg² mosaic as part of COSMOS (∼28 mag, 5σ; Scoville et al. 2007a) and H-band imaging of a 0.66 deg² subarea (∼25 mag, 5σ), largely from the COSMOS-DASH program (Mowla et al. 2019b). The deepest pencil-beam surveys, reaching 29−30 mag or fainter in areas magnified through gravitational lensing by massive foreground galaxy clusters (e.g., the Hubble Frontier Fields, HFF), probe z ∼ 2 galaxies down to ≲0.01 L* and masses well into the dwarf regime. At the other end, some recently undertaken very wide-area surveys, such as the optical+near-IR KiDS+VIKING and the optical Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP), are deep enough to already yield ∼10⁶ sources at z ∼ 2 and ≳0.1−1 L* in the first few hundreds of square degrees mapped. L*: characteristic value of the galaxy luminosity distribution described by a Schechter function, Φ(L) = (φ*/L*) (L/L*)^α e^(−L/L*). See Marchesini et al. (2012) and Parsa et al. (2016) for rest-optical and UV luminosity functions out to cosmic noon.

2.2. Spectroscopic Surveys in the Optical to Near-infrared

zspec: Spectroscopic redshift, based on a spectrum (typically at R > 200). S/N: Signal-to-noise ratio. Spectroscopic redshifts (zspec) are essential to validate and optimize zphot techniques, construct the most precise galaxy distribution functions from confirmed samples, and provide secure targets for detailed and time-consuming follow-ups. Spectroscopy at R > 200 is adequate to measure redshifts to within ∼300 km s⁻¹ or better from ISM emission lines and/or from stellar absorption features. To be secure, zspec's rely on the identification of at least two spectral features, and the success also depends on the signal-to-noise ratio (S/N) of the data, the wavelength range probed, and the galaxy type. For instance, it is easier to measure the redshift of a source with higher emission line or continuum surface brightness, introducing a notorious bias towards bluer, more compact, more star-forming galaxies at z ≳ 1.5 in optical spectroscopic surveys. The challenges of confirming large samples at z ∼ 2 are manifold. The galaxies are faint: at z = 2, L* in the UV corresponds to R ∼ 24.5 mag and L* in the V band to H ∼ 22.3 mag, often necessitating long integrations to reach a sufficient S/N for zspec measurements even with 8 m-class telescopes. Absorption and emission features observable in the optical are typically weak. The stronger nebular emission lines are shifted into the near-IR regime, which is plagued by a dense forest of >1000 bright and variable emission lines, mostly from OH radicals excited in the upper atmosphere, by broad intervals of low atmospheric transmission around 1.4 and 1.9 µm, and by thermal background from instrument, infrastructure, and atmosphere at λobs > 2 µm. zgrism: Redshift from grism spectroscopy, here specifically from HST R ∼ 130 grism data supplemented with photometric SEDs.
In the optical, great progress has come from high-throughput multi-object spectrographs (MOS) such as Keck/LRIS and DEIMOS and VLT/VIMOS and FORS2, optimized to extend bluewards to the atmospheric cutoff near 3000 Å or redwards to ∼1 µm to overcome the "redshift desert." The more recent arrival of sensitive cryogenic near-IR MOS, including Keck/MOSFIRE and Subaru/FMOS and MOIRCS, further expanded confirmed z ∼ 2 samples, mainly through rest-optical emission lines. Near-IR observations from space have an obvious advantage, and use of the HST/WFC3 grism G141 with R ∼ 130 has been very productive at yielding redshifts. The lack of atmosphere ensures continuous coverage of the full λobs = 1.1−1.7 µm grism window and greatly enhances continuum sensitivity, reducing biases towards line-emitting sources. The slitless aperture maximizes multiplexing and avoids target pre-selection biases, with the added ability to map spectral features at HST's angular resolution. Reliable grism redshifts (zgrism) from the 3D-HST and AGHAST programs, for instance, have nearly tripled the number of 1 < z < 3 galaxies with secure spectroscopic redshifts in the five CANDELS fields, with a typical zgrism accuracy of 0.003 × (1 + z) (∼1000 km s⁻¹ at z ∼ 2) at JH ≤ 24 mag, and only 2−3× worse for the subset of quiescent galaxies. Besides redshift, spectra also provide a wealth of information on the stellar, gas, dust, and AGN content of galaxies. Detailed information is more demanding in terms of S/N and spectral resolution, in order to measure accurate emission and absorption line strengths and profiles over a range of fluxes and equivalent widths, and to deblend spectral features (e.g., the [OII] and [SII] doublets, or kinematic components such as host disk and gas outflow). Among many results from MOS surveys at z ∼ 2, scaling relations have been constructed, such as the MS using SFR estimates from Balmer lines or UV luminosities, and the mass-metallicity(-SFR) relationship from strong-line diagnostics of the gas-phase oxygen abundance. Excitation sequences in nebular line ratio diagrams have been examined to characterize the evolving ISM conditions at high z. Galaxy kinematics have been investigated from integrated line widths and, with data subsets of sufficient spatial resolution and suitable slit alignment, from velocity gradients. The demographics and energetics of galactic outflows have been investigated from the strength and velocity profile of rest-UV interstellar absorption and rest-optical nebular line emission. In addition, stellar and dynamical properties of smaller but important samples of massive quiescent galaxies have been constrained from absorption (and in some cases weak emission) features, valuable to establish the fate of massive SFGs from their likely immediate descendants. These results are discussed throughout Sections 3, 4, and 5.

2.3. Integral Field Spectroscopic Surveys

Imaging spectroscopy at R ≳ 2000 arguably provides the richest datasets of individual sources, a large multiplexing of its own. Integral field unit (IFU) spectroscopy is the most efficient technique, collecting the full three-dimensional (3D) spatial and spectral information simultaneously, and it became possible for z ∼ 2 SFGs (with typical Hα fluxes of ∼10⁻¹⁶ erg s⁻¹ cm⁻² or fainter) with sensitive near-IR IFU instruments at 8 m-class telescopes.
IFU studies have so far mainly used Hα+[NII] line emission (or [OIII]+Hβ at z ≥ 2.7) to map the internal gas motions of galaxies, the distribution of star formation, gas excitation, and ISM metallicities within them, and the extent and properties of the gaseous winds they expel. Key results on these topics are covered in Sections 3 and 4. First samples were obtained with single-IFU instruments including VLT/SINFONI, Keck/OSIRIS, and Gemini/NIFS, all with resolving powers of R ∼ 2000−5000 and designed to be fed by an adaptive optics (AO) system improving the angular resolution from the typical near-IR seeing of ∼0″.5−0″.7 at their sites to the diffraction limit of their host telescopes (∼50−60 mas at λobs = 2 µm). To date, near-IR single-IFU samples amount to ∼400 targets altogether, with roughly half of these sources observed in AO mode. These samples, all drawn from spectroscopically-confirmed subsets of parent photometric samples with diverse primary selection criteria (magnitudes, colors, narrow-band identification, strong lensing), form a heterogeneous collection probing different parts of z − M⋆ − SFR space (e.g., Glazebrook 2013, and references therein). Larger and more complete surveys have been enabled with the advent in 2013 of KMOS at the VLT, with 24 IFUs deployable over a 7′-diameter patrol field. KMOS operates in natural seeing, covers 0.8−2.4 µm with four bandpasses at R ∼ 4000 each, and is well suited to detect faint, extended line emission over a wide redshift span. With >2000 SFGs targeted so far, KMOS has put results from single-IFU work on a more robust statistical footing. Importantly, it has also allowed pushing into regimes previously unexplored with IFUs, including line emission of massive sub-MS galaxies (Belli et al. 2017b), and continuum for stellar populations and kinematics of massive quiescent field and cluster galaxies (e.g., Mendel et al. 2015, Beifiori et al. 2017). Figure 3 illustrates the observational and galaxy parameter space of the main near-IR IFU surveys of rest-optical line emission. The log(M⋆/M⊙) ≳ 9.5 SFG population is extensively covered; detections also extend to SFRs ∼10× or more below the MS, and to log(M⋆/M⊙) ≲ 9 preferentially above the MS.

Figure 3. Overview of near-IR IFU surveys of line emission of z ∼ 1−3 galaxies in observational and galaxy parameter space (see list in Supplemental Table 2). Left: The surveys are represented in terms of the average number of spectral bands covered per target, the median on-source integration time per band (on a logarithmic scale), and the number of objects targeted (with symbol area proportional to this number). Samples of field galaxies observed in natural seeing or with AO, and lensed galaxies in either mode, are plotted as circles, squares, and diamonds, respectively. The color coding denotes the median redshift of the samples. Right: Distribution of detected targets in stellar mass vs. MS offset for samples where these properties are available. Different colors and symbols are used to show the redshift, and to differentiate field vs. lensed galaxies, and no-AO vs. AO data, as labeled in the plot. The underlying distribution of galaxies in a similar z range to H160 = 26.5 mag from the 3D-HST catalog is shown in greyscale for comparison. The histograms compare the projected distributions in M⋆ of field galaxies observed in natural seeing (scaled by ×1/7, in purple), field galaxies observed with AO (green), and lensed galaxies (yellow).
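To put the Hα flux regime quoted above in physical context, a minimal sketch converting an observed line flux into a dust-uncorrected SFR via the Kennicutt (1998) calibration (which assumes a Salpeter IMF), under the cosmology adopted in this article; the example flux and redshift are illustrative:

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # cosmology adopted in the text

def sfr_from_halpha(flux_cgs: float, z: float) -> float:
    """Dust-uncorrected SFR (Msun/yr) from an Halpha flux in erg/s/cm^2,
    using the Kennicutt (1998) calibration SFR = 7.9e-42 L(Halpha)."""
    d_l = cosmo.luminosity_distance(z).to("cm").value
    l_halpha = 4.0 * np.pi * d_l**2 * flux_cgs  # erg/s
    return 7.9e-42 * l_halpha

print(sfr_from_halpha(1e-16, 2.2))  # a faint z~2 flux, before any dust correction
```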
2.4.1. Far-IR observations. Observations at far-IR wavelengths probe the dust-obscured stellar and AGN radiation output from galaxies in the form of thermal emission (see Lutz 2014, for a review). Spitzer/MIPS has delivered the deepest views of the dusty ISM of cosmic noon galaxies through 24 µm observations, enabling the detection of individual galaxies at z ∼ 2 down to SFR ∼ 10 M⊙ yr⁻¹ (e.g., Whitaker et al. 2014). At these redshifts, however, 24 µm data measure rest-frame 8 µm light, where warm and transiently heated dust in HII regions and around AGN, polycyclic aromatic hydrocarbons arising from photodissociation regions, and absorption by silicate dust all contribute. The conversion to total IR luminosity and SFR is thus prone to important uncertainties from the SED that needs to be assumed for the large extrapolation over the far-IR dust emission peak, typically around rest-frame 100 µm (corresponding to a characteristic dust temperature of ∼30 K). Measurements with submm instruments (e.g., JCMT/SCUBA and APEX/LABOCA) provided useful constraints on the Rayleigh-Jeans side, where AGN heating is minimized. The wavelength coverage and sensitivity afforded by Herschel has been vital in sampling the far-IR SED peak directly, enabling robust calorimetric estimates of galaxy SFRs (and cold dust properties).

2.4.2. Submm to mm observations. Observations in the submm to mm regimes probe the cold ISM component in galaxies. (Sub)mm: Submillimeter and millimeter. Its main constituent, H2, lacks a permanent electric dipole moment and hence relevant emission lines at low excitation temperatures. Therefore, the strong rotational lines of CO (the second most abundant molecule) are used to trace the molecular gas properties and kinematics of galaxies, with mid-J transitions (2-1, 3-2, 4-3) being commonly employed at z ∼ 2. Molecular gas masses (hereafter simply Mgas) are estimated via an excitation correction to the ground-state 1-0 line and a conversion from CO line luminosity to H2 mass (e.g., Bolatto et al. 2013). The CO 1-0 transition is typically fainter than the mid-J lines and is shifted into the high-frequency radio bands, accessible for instance with the JVLA. The cold dust continuum luminosity is a viable and observationally efficient proxy for the gas mass and spatial distribution (e.g., Scoville et al. 2017). Great strides in cold ISM studies of z > 1 galaxies have been made possible with the IRAM/NOEMA interferometer in the northern hemisphere and ALMA in the south. With the gains in sensitivity and angular resolution of these arrays, studies of the global cold ISM content have shifted from the most luminous "submm galaxies" to the more typical MS population, although substantial integration times are still needed, especially for CO line measurements, and the limited primary beam sizes hamper mapping of sizeable areas (see recent reviews by Combes 2018 and Tacconi et al. 2020). Pointed CO or continuum surveys have been most efficient at assembling sets of ∼10−100 galaxies at z ∼ 2 drawn from well-characterized parent samples. Blank-field mosaicking surveys have been undertaken to build censuses out to z ∼ 4, either optimized for emission line searches through spectral scans or emphasizing dust continuum emission, yielding so far a few to several tens of secure detections, with counterparts in the (deep) optical to mid-IR imaging available in the survey fields. Most of the CO and dust continuum measurements at z ∼ 2 are for massive log(M⋆/M⊙) ≳ 10−10.5 SFGs.
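A minimal sketch of the Mgas estimate described above, using the standard line-luminosity formula of Solomon & Vanden Bout (2005); the excitation ratio and conversion factor below are common but by no means unique assumptions, and the input flux is illustrative:

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def co_line_luminosity(flux_jykms: float, nu_obs_ghz: float, z: float) -> float:
    """L'_CO in K km/s pc^2 from a velocity-integrated flux in Jy km/s."""
    d_l = cosmo.luminosity_distance(z).value  # Mpc
    return 3.25e7 * flux_jykms * nu_obs_ghz**-2 * d_l**2 * (1 + z) ** -3

# illustrative CO(3-2) detection of a z = 2.2 SFG
z, nu_rest_32 = 2.2, 345.796  # GHz
lp_32 = co_line_luminosity(0.5, nu_rest_32 / (1 + z), z)
r31 = 0.5        # assumed CO(3-2)/CO(1-0) excitation correction
alpha_co = 4.36  # assumed Galactic conversion factor, Msun per (K km/s pc^2)
m_gas = alpha_co * lp_32 / r31
print(f"Mgas ~ {m_gas:.2e} Msun")
```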
Detection in less massive (unlensed) galaxies becomes increasingly difficult as the amount of gas gets lower and the ISM metallicity drops, leading to more extensive UV photodissociation of CO and lower dust abundances. Alternative cold ISM tracers in emission are not practical because of their weakness, or because their higher frequencies make them difficult or impossible to access from the ground at these redshifts. An obvious avenue for the future, in reach of NOEMA and ALMA, is more systematic, spatially-resolved sub-arcsec CO and dust mapping at z ∼ 2, which is currently limited to fairly small heterogeneous sets dominated by very luminous or massive galaxies (e.g., Tacconi et al. 2013, Silverman et al. 2015, Barro et al. 2017b, Tadaki et al. 2017a).

2.4.3. Radio observations. At longer radio wavelengths, continuum observations probe AGN and star-forming systems mainly through non-thermal synchrotron emission, free from dust/gas obscuration. In SFGs, the synchrotron emission is produced in supernova remnants and, towards higher frequencies, free-free emission from HII regions also contributes (Condon 1992). In AGN sources, the origin is more diverse, including jets, hotspots, and large-scale lobes, which complicates the quantitative relationship between observed radio emission and AGN luminosity (e.g., Tadhunter 2016). Surveys at 1.4−5 GHz and at lower frequencies down to ∼200 MHz (λ ∼ 6−150 cm) with facilities such as the JVLA, VLBA, LOFAR, GMRT, GBT, and ATCA have been carried out in many cosmological deep fields, with a range of sensitivities, beam sizes, and areas. AGN dominate at brighter flux densities, while SFGs become increasingly important at sub-mJy levels. Given that the tight radio-IR luminosity correlation for SFGs holds out to at least z ∼ 3, with fairly well constrained (mild) evolution (LIR/L1.4GHz ∝ (1 + z)^α, with α in the range −0.1 to −0.2), the radio flux density can serve as an SFR estimator, and a radio excess above the correlation can be used as a diagnostic for the presence of an AGN (e.g., Magnelli et al. 2015, Delhaize et al. 2017). The deepest GHz-regime VLA imaging at ∼1″ resolution (in AEGIS, GOODS-N, COSMOS; Ivison et al. 2007, Morrison et al. 2010) reaches 5σ sensitivities of ∼10−25 µJy, corresponding to SFRs ∼ 100 M⊙ yr⁻¹ at z ∼ 2.

2.4.4. X-ray observations. At the other end of the spectrum, observations of X-ray radiation (0.5−100 keV) in galaxies trace predominantly nuclear activity (e.g., Brandt & Alexander 2015). Produced in the immediate vicinity of the SMBH, via Compton up-scattering in the accretion-disk corona, in powerful nuclear jets, and via Compton reflection and scattering interaction processes with matter throughout the nuclear regions, X-rays are able to penetrate through substantial gas columns (becoming hindered in the highly Compton-thick regime with NH ≳ 1.5 × 10²⁴ cm⁻²). Non-AGN X-ray emission in galaxies arises from X-ray binaries and hot gas, but is both less energetic and softer compared to that of (luminous) AGN. The most extensive cosmological surveys have been carried out with the spaceborne Chandra and XMM-Newton observatories, operating for 20 years, with on-board instruments enabling efficient spectroscopic imaging of wide areas in soft and hard bands (∼0.2−2 and 2−10 keV), while the more recently launched NuSTAR telescope has started to unveil the distant universe in ≳10 keV radiation.
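For orientation, converting an observed X-ray flux into a rest-frame luminosity involves a K-correction that, for a power-law photon spectrum, reduces to a simple factor of (1+z)^(Γ−2). A minimal sketch, with the photon index Γ = 1.9 an assumed typical AGN value and the input flux illustrative:

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def lx_rest(flux_cgs: float, z: float, gamma: float = 1.9) -> float:
    """Rest-frame band luminosity (erg/s) from an observed band flux
    (erg/s/cm^2), K-corrected for a power-law photon index gamma."""
    d_l = cosmo.luminosity_distance(z).to("cm").value
    return 4.0 * np.pi * d_l**2 * flux_cgs * (1 + z) ** (gamma - 2)

print(np.log10(lx_rest(1e-16, 2.0)))  # log LX of an illustrative faint z = 2 source
```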
The deepest and sharpest views were achieved with Chandra/ACIS through the cumulative 7 Ms exposure of ∼485 arcmin² in the Chandra Deep Field-South (encompassing GOODS-S), yielding nearly 1000 detections to z ∼ 4.5. While AGN dominate the source counts, the more so at higher z and luminosities, the depth of the data reaches intrinsic rest-frame 0.5−7 keV log(LX/erg s⁻¹) ≲ 42.5 out to z ∼ 3 (Luo et al. 2017), where SFRs from several 100 to 1000 M⊙ yr⁻¹ can be detected. Because the rapid variability of AGN emission at these energies and the potential presence of high absorbing gas columns near the nucleus can bias X-ray samples, AGN identification benefits from complementary diagnostics such as high-excitation rest-UV/optical emission lines, radio luminosity, and mid-IR colors (e.g., Padovani et al. 2017).

STELLAR, GAS, AND STRUCTURAL PROPERTIES
M⋆, Mgas, Mbar: Total stellar, cold molecular gas, and baryonic (stellar+gas) masses.
Schechter function: Parametrization of the galaxy number density vs. stellar mass (or luminosity), Φ(M) = (φ*/M*) (M/M*)^α e^(−M/M*) (single- or double-Schechter); the characteristic value M* is referred to as the Schechter mass (Schechter 1976).
sSFR: Specific star formation rate, SFR/M⋆.
ΔMS: Logarithmic offset in sSFR (or SFR) from the MS, log(sSFR/sSFR_MS(M⋆, z)).
SFH: Star formation history; common forms include an exponential ∝ e^(−t/τ), delayed ∝ t e^(−t/τ), or log-normal ∝ (1/t) exp[−(ln t − t0)²/(2τ²)], where t, τ, and t0 are the time, timescale, and logarithmic delay time.
Sérsic profile: Frequently used parametrization of the surface density distribution of galaxies, Σ(r) = Σ(Re) exp{−bn [(r/Re)^(1/n) − 1]}, where n is the Sérsic index and bn is a scaling coupled to n such that half of the total light is within Re (e.g., Graham & Driver 2005). The Gaussian, exponential, and de Vaucouleurs profiles correspond to n = 0.5 and bn = 0.69, n = 1 and bn = 1.68, and n = 4 and bn = 7.67, respectively.
Re: Effective radius, enclosing half the total light (or mass).
Rd: Disk scalelength for an exponential profile Σ(r) = Σ(0) exp(−r/Rd), in which case Re = 1.68 Rd.
b/a: Projected minor-to-major axis ratio of an inclined disk (also denoted q).
Re,circ: Circularized effective radius, scaling Re by √(b/a).
R80: Radius enclosing 80% of the total light (or mass).
Σ⋆, Σgas, ΣSFR: Stellar mass, gas mass, and SFR surface densities, conventionally within Re, taking half the total M⋆, Mgas, and SFR and dividing by πRe².
Σ1kpc: Stellar mass surface density within the central 1 kpc, M⋆(<1 kpc)/π(1 kpc)², where M⋆(<1 kpc) is computed from the best-fit Sérsic profile to the surface density distribution.
fgas, τdepl: Gas-to-baryonic mass fraction, Mgas/Mbar, and gas depletion time via star formation, Mgas/SFR.

Mass-matching vs. Abundance-matching
In order to bring theoretical models and observations into the same arena for an apples-to-apples comparison, a common interface needs to be found. Different approaches can be employed, bringing this interface either very close to the direct observables (e.g., by treating numerical simulations with radiative transfer and/or placing them into a lightcone to predict number counts as a function of observed flux) or alternatively working from the observables backward to interpret them in terms of physical quantities (e.g., using the spectral modeling techniques outlined in the Supplemental Text). Once stellar population properties such as stellar masses are inferred from the multi-wavelength SEDs, and provided sufficient depth, mass-complete samples of galaxies can be extracted from the flux-limited parent catalog. Those in turn can serve as a basis for population-averaged comparisons to models, where for example the evolution of the SFR, size, metallicity, rotational velocity, or other physical quantity is traced as a function of redshift at fixed stellar mass. How far back the population-averaged evolution can be recovered depends on the mass regime considered, as the parent catalog's flux limit will necessarily impose a redshift-dependent mass completeness limit. While valuable, such population censuses do not by themselves reflect the growth histories of individual systems. Galaxies gain stellar mass through star formation and merging activity, moving out of the considered mass bin while others move in. Methods to empirically reconstruct evolutionary sequences for individual galaxies from the mass-complete
samples, linking their progenitor and descendant phases, gained significant attention in recent years. The most common ansatz is to assume the preservation of mass ranking, in which case progenitors and descendants are anticipated to live at the same comoving number density. The resulting evolutionary tracks can then be directly compared to the main progenitor branch extracted from a galaxy formation model. The efficacy of this technique relies in part on the infrequent occurrence of major galaxy mergers, and indeed refinements have been proposed on the basis of cosmological simulations to account for a non-negligible divergence in growth rates, in part influenced by merging activity (van de Voort 2016, Clauwens et al. 2017). Here, it is of note that slightly different prescriptions are desired for tracing galaxies backwards vs. forward, and that the technique is designed primarily to work when the galaxy population is well described as a one-parameter family characterized by stellar mass. If considering subpopulations defined by, e.g., mass and color, galaxies may not only enter a particular mass bin due to their stellar growth, but also due to their color evolution, potentially introducing progenitor bias. Finally, from the perspective of the flux-limited parent catalog, the abundance-matching technique leaves more of the collected data unused, as higher mass cuts are adopted at later times to identify progenitors and descendants, whereas the deepest mass completeness limits are reached at the lowest redshifts.

3. GLOBAL PROPERTIES OF STAR-FORMING GALAXIES AT z ∼ 2

Along with the cosmically integrated evolution of the SFR, stellar mass, and SMBH accretion rate density (Madau & Dickinson 2014), a key outcome of lookback surveys was to reveal and establish the existence of scaling relations between global properties of galaxies out to at least z ∼ 2.5, and a census of how they are populated (often quantified by galaxy type). In what follows, we first address the build-up of stellar mass in galaxies. Section 3.1 considers the scaling relation between the (in-situ) growth rate (SFR) and its time integral (M⋆, including effects of stellar mass loss and merging), followed by an overview of results on the census (Section 3.2) and a discussion of the interpretation of these joint observational constraints (Section 3.3). We then expand our scope to include global structural measures (Section 3.4), ISM probes (Sections 3.5-3.6), and nuclear activity (Section 3.7).
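Before turning to these relations, the number-density matching ansatz described in the sidebar above can be made concrete with a toy calculation; the single-Schechter parameters below are illustrative placeholders, not literature fits:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def schechter(m, phi_star, m_star, alpha):
    """Schechter mass function Phi(M) per unit mass."""
    x = m / m_star
    return (phi_star / m_star) * x**alpha * np.exp(-x)

def n_cum(m_min, pars):
    """Comoving number density of galaxies above m_min."""
    return quad(schechter, m_min, 1e13, args=pars)[0]

pars_z2 = (4e-4, 6e10, -1.4)     # toy mass function at z ~ 2
pars_z0 = (1e-3, 8e10, -1.4)     # toy mass function at z ~ 0
n_target = n_cum(3e10, pars_z2)  # number density of the selected z ~ 2 progenitors
# descendant mass at z ~ 0 living at the same comoving number density
m_desc = brentq(lambda m: n_cum(m, pars_z0) - n_target, 1e9, 1e12)
print(f"log M_descendant ~ {np.log10(m_desc):.2f}")
```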
3.1. The "Main Sequence" of Star-forming Galaxies

Locally, the existence of a strong correlation between the SFR and stellar mass of galaxies was first established based on the vast number statistics offered by the Sloan Digital Sky Survey (SDSS; Brinchmann et al. 2004). Subsequent work on deep lookback surveys revealed that a similarly tight and near-linear relation, dubbed the "Main Sequence," was already in place by z ∼ 2 (Noeske et al. 2007, Elbaz et al. 2007). Its main change with cosmic epoch is one of rapid zero-point evolution. For galaxies below 10^10 M⊙, the specific SFR evolves as sSFR ∝ (1+z)^1.9, whereas more massive galaxies exhibit a faster pace of evolution, with sSFR ∝ (1+z)^(2.2−3.5) for log(M⋆/M⊙) = 10.2−11.2. The past few years have seen a consolidation of the MS relationship, leading to an emerging picture in which (a) the scatter is constant at 0.2−0.3 dex over the full stellar mass and redshift range probed, (b) the low-mass slope is consistent with unity, and (c) a turnover and flattening is evident at higher masses, most prominently so towards lower redshifts and conversely nearly vanishing by z ∼ 2 (e.g., Whitaker et al. 2014, Tomczak et al. 2016; Figure 4). Some studies favor or adopt single power-law fits (Speagle et al. 2014, Pearson et al. 2018), then finding the slope to steepen with increasing redshift. Quantitative differences in the derived scatter, slope/shape, and normalization can be attributed to a range of reasons, including (1) the method and strictness of SFG selection, (2) the dynamic range over which the relation is fit, and (3) the use of different SFR tracers. We briefly elaborate on these systematics before highlighting the significance of the MS scaling relation. Whitaker et al. (2012) demonstrate how a UVJ color selection vs. selecting only blue star-forming galaxies makes the difference between finding a sub-linear vs. linear slope. Similarly, Rodighiero et al. (2014) and Johnston et al. (2015) illustrate how, by adopting different color cuts or selection criteria based on SED-modeled properties, inferred slopes may vary between ∼0.8 and ∼1. Noteworthy also is that, when restricting samples to pure disks or considering only the disk components of SFGs, a slope of unity is found (Abramson et al. 2014). As we will allude to in Section 3.3, galaxies may well lack a bimodality in their sSFR distribution akin to that seen in their colors, implying that the choice of SFG selection criterion may largely be arbitrary. In this case there is no formally correct answer regarding the MS shape, and inferences on galaxy evolution need to treat the SFG and quiescent populations jointly, or at least preserve internal consistency in the selection criteria used. Unavoidably, the dynamic range in stellar mass over which the MS shape can be constrained is a function of redshift, with for example the ZFOURGE magnitude limit of Ks = 25 mag corresponding to 90% completeness limits of log(M⋆/M⊙) ∼ 8.5, 9.5, and 10 at z ∼ 1, 2, and 3, respectively (Tomczak et al. 2016).

Figure 4. Evolution of selected galaxy scaling relations and censuses from z = 3 to z = 0.1 (color-coded from red to purple as labeled in the panels). Top row: The first three panels show the MS of SFGs and the stellar mass function for all galaxies and for SFGs and quiescent galaxies (QGs) separately (thick and thin lines), consistently extracted from the same data set (Tomczak et al. 2014, 2016). The rightmost panel plots the molecular gas mass fraction from the scaling relations of Tacconi et al. (2020).
Bottom row: The two leftmost panels contrast the relationships between stellar mass, size, and stellar mass surface density within r < 1 kpc for SFGs and QGs, respectively (thick and thin lines; van der Wel et al. 2014a, Barro et al. 2017a). The next panel plots the stellar mass vs. gas-phase metallicity relation estimated via [NII]/Hα from Wuyts et al. (2014) and Zahid et al. (2014) (using the Pettini & Pagel 2004 calibration). The rightmost panel illustrates the evolution of the fraction of rotation-dominated SFGs (with ratio of intrinsic rotation velocity to velocity dispersion > 1) based on ionized gas kinematics (Kassin et al. 2012, Simons et al. 2017). In the various panels, dashed lines indicate extrapolations in M⋆ and/or z when no consistent measurements or fits were available.

Particularly if there is curvature to the MS, this can impact the recovered parameterizations that adopt a power-law slope. The finite depth of observed SFR tracers further implies that many studies rely at least in part on stacking procedures, which may suffer from confusion biases (Pearson et al. 2018). Even with extreme depth and a consistent SFG definition, determinations of the MS scatter, normalization, and shape will be affected by uncertainties in the inferred SFRs and stellar masses. When the two are derived from overlapping data, and sometimes a single modeling procedure, the uncertainties will be correlated and can potentially conspire to produce an artificially tight relation, compensating the opposite and mostly subtle boosting of scatter due to finite redshift bins that is not always accounted for. A comprehensive discussion of the systematic uncertainties affecting estimates of SFR and M⋆ is presented by Conroy (2013; see also the Supplemental Text, which summarizes the ingredients, assumptions, and challenges of spectral modeling techniques). Possible concerns include the saturation of reddening as a dust attenuation tracer at the highly star-forming and massive end (e.g., Wuyts et al. 2011a), extra extinction towards HII regions, which remains difficult to pin down observationally (e.g., Reddy et al. 2015), contamination by other sources of emission such as AGN, circumstellar dust around asymptotic giant branch (AGB) stars, or diffuse cirrus dust heated by old stellar populations, and unintended biases induced by the choice of adopted parameterization of, and/or prior on, the SFH (Carnall et al. 2019). Despite the above considerations, the meta-analysis by Speagle et al. (2014) finds a remarkable consensus among MS observations, with an interpublication scatter as small as 0.1 dex. On an individual galaxy basis, different SFR estimates do of course vary more than that, but they may not need to agree in detail either, as they can probe different timescales. H recombination lines are the closest to a measurement of the instantaneous SFR because of the short lifetime (∼10 Myr) of Lyman-continuum-producing OB stars, while rest-UV and IR tracers will integrate the contribution of stars with (stellar) main sequence lifetimes of ∼100 Myr. As such, differences in MS scatter inferred from different SFR tracers could in principle encode the short-term stochasticity of star formation and the timescale on which galaxies lose "memory" of previous activity (Caplar & Tacchella 2019). A slightly enhanced scatter around the Hα-based MS at cosmic noon, relative to the one constructed from UV or UV+IR based diagnostics, has been reported (Shivaei et al. 2015, Belli et al. 2017b), but systematic uncertainties regarding dust corrections make the interpretation in terms of star formation timescales not unique.
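The idea that tracer timescales encode the stochasticity of star formation can be demonstrated with a toy experiment: generate a stochastic SFH and smooth it over 10 Myr vs. 100 Myr windows, mimicking Hα-like and UV/IR-like sensitivities. A minimal sketch with arbitrary burstiness parameters, not a model calibrated to data:

```python
import numpy as np

rng = np.random.default_rng(1)
dt_myr = 1.0
t = np.arange(0, 3000, dt_myr)     # 3 Gyr sampled at 1 Myr
# toy stochastic SFH: correlated log-normal fluctuations around a constant mean
logsfr = np.zeros_like(t)
tau_corr, sigma = 200.0, 0.35      # arbitrary correlation time (Myr) and amplitude (dex)
for i in range(1, len(t)):         # Ornstein-Uhlenbeck-like random walk in log SFR
    logsfr[i] = (logsfr[i - 1] * (1 - dt_myr / tau_corr)
                 + sigma * np.sqrt(2 * dt_myr / tau_corr) * rng.normal())
sfr = 10.0 ** logsfr

def smoothed_scatter(window_myr: float) -> float:
    """Scatter (dex) of the SFH after boxcar smoothing over a tracer timescale."""
    k = int(window_myr / dt_myr)
    sm = np.convolve(sfr, np.ones(k) / k, mode="valid")
    return np.std(np.log10(sm))

print(smoothed_scatter(10), smoothed_scatter(100))  # Halpha-like vs. UV/IR-like scatter
```

The shorter-timescale tracer retains more of the input burstiness, qualitatively reproducing the enhanced Hα-based scatter discussed above.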
SFH: Star formation history. Setting aside the above caveats, we conclude this Section by noting two immediate implications of the existence of a MS relation and its observed evolution with cosmic time. First, assuming SFGs are located on the MS at all times, we can integrate along the evolving scaling relation to recover the typical star formation history (SFH). Doing so, one unambiguously finds that the star formation activity first rises before it falls, as such mimicking the shape of the cosmic SFR density evolution (Renzini 2009, Peng et al. 2010, Leitner 2012, Speagle et al. 2014, Tomczak et al. 2016, Ciesla et al. 2017). In common with findings from the fossil record (e.g., Thomas et al. 2005), these studies also infer the SFHs of more massive galaxies to peak earlier. A second point of significance is that the tightness of the MS implies that any large excursions in star formation activity, as one might expect from (major) merging, have either very short duty cycles or are very rare (Rodighiero et al. 2011).

3.2. The Stellar Mass Function

An extensive body of work has documented the census of galaxies as a function of their stellar mass over most of cosmic history, on the basis of well-sampled SEDs for deep, near-IR selected samples (e.g., Ilbert et al. 2013, Tomczak et al. 2014; see Figure 4). In common between these studies are the following findings. First, provided sufficiently deep stellar mass completeness limits, a double-Schechter functional form is favored over a single-Schechter fit. This conclusion holds for both the star-forming and quiescent populations individually, and for the combined, total galaxy stellar mass function. Second, between z ∼ 2 and the present day there is no statistically significant evolution in the characteristic mass M* of either the total stellar mass function or that of SFGs (Peng et al. 2010). Values quoted in the literature for this characteristic mass vary in the range log(M*/M⊙) = 10.6−11, with the higher results stemming from single-Schechter and the lower ones from double-Schechter fits (see, e.g., Tomczak et al. 2014). Minor differences further arise from the adopted fitting method (1/Vmax vs. maximum likelihood), systematics in the determination of redshifts and stellar masses, and how uncertainties in the latter are accounted for. A third conclusion is that little to no evolution in the low-mass slope α ∼ −1.5 is noted since cosmic noon, neither for the total nor the star-forming galaxy population. Most of the evolution over the past 10 Gyr can thus be described by an increase in Φ*. The redshift-invariance of the low-mass slope α is in line with a MS slope of unity at masses below log(M⋆/M⊙) < 10.5. As pointed out by Peng et al. (2010), a sub-linear MS slope would inevitably lead to a fast steepening of α, and only very slight deviations from a unity slope can be accommodated by merging away low-mass galaxies. Whereas until recently inconsistencies at the level of 0.2−0.3 dex were found between integrating the MS metric and the evolving stellar mass function (e.g., Tomczak et al. 2016), which could not all be accounted for by merging, the latest such exercise with revised SFRs and stellar masses from advanced spectral modeling shows an improved internal consistency.
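The first implication above, integrating along the evolving MS to obtain a typical SFH, can be sketched numerically. The sSFR normalization, the (1+z)^2.2 scaling (taken from the evolution quoted in Section 3.1), and the mass-return fraction below are illustrative assumptions; mergers are neglected:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, z_at_value

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def ssfr_ms(z: float) -> float:
    """Toy MS sSFR in 1/Gyr: ~0.1/Gyr at z = 0, scaling as (1 + z)^2.2."""
    return 0.1 * (1 + z) ** 2.2

R = 0.4                             # assumed mass-return fraction from stellar mass loss
t = np.linspace(cosmo.age(3.0).value, cosmo.age(0.0).value, 300)  # Gyr
m = 10.0 ** 9.5                     # seed stellar mass at z = 3
for i in range(len(t) - 1):
    z = float(z_at_value(cosmo.age, t[i] * u.Gyr))
    m *= 1 + (1 - R) * ssfr_ms(z) * (t[i + 1] - t[i])  # dM = (1 - R) SFR dt
print(f"log M grows from 9.5 to {np.log10(m):.1f} between z = 3 and z = 0")
```

The resulting track rises and then flattens as the MS normalization declines, the behavior described in the studies cited above.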
As illustrated in Figure 4, the quiescent galaxy mass function looks markedly different. At all epochs it features a clear peak around M*, and the quiescent population grows in number more rapidly than the star-forming one. At masses above 10^10 M⊙, quiescent number densities have grown by a factor of 6 since z ∼ 2, whereas at lower masses there is a 15−30× increase. The mass-dependent growth of the quiescent population, with quenching of low-mass galaxies happening at later times (Ilbert et al. 2013, Huang et al. 2013), has been attributed to two different quenching channels. Since the environmentally driven low-mass channel only manifests itself appreciably after the epoch of cosmic noon, this review focuses on the high-mass quenching dominant at early times.

Interpreting the Observed Stellar Mass Growth

Whereas observational campaigns of the stellar mass growth across most of cosmic history have tightened the error bars on its scaling relation (the MS) and census (the galaxy stellar mass function), perhaps of more debate today is the interpretation of these observational diagnostics. That is, what are the implications of the cross-sectional view of the galaxy population at a range of epochs for the evolutionary tracks that individual galaxies follow? In this context two schools of thought have developed which, to use the nomenclature of Abramson et al. (2015), can be described as "mean-based" and "dispersion-based" approaches. The former aim to reconstruct the average SFH of individual galaxies based on ensemble averages (e.g., Peng et al. 2010, Behroozi et al. 2013b), whereas the latter put emphasis on the diversity of SFHs (e.g., Gladders et al. 2013, Kelson 2014, Abramson et al. 2015). Both schools infer characteristic SFHs that first rise and then fall, but differ in key aspects of the interpretation. Peng et al. (2010) for example adopt the redshift-invariance of M* as an indication that galaxies live on and grow along the evolving MS until they reach this critical mass, after which the probability of quenching increases rapidly, ∝ 1 − e^(−M⋆/M*). Proposed mechanisms to explain this "mass quenching" include the rapid expulsion of gas by SMBHs, but there is no consensus yet regarding its physical cause. Scatter around the MS is in such models typically attributed to short-timescale (∼ 10^(7−8) yr) variations in SFR at a given mass, induced by the breathing cycles of star formation feedback and temporal fluctuations in the rate of gaseous inflows and/or minor mergers. The "dispersion-based" school on the other hand interprets scatter around the MS relation as an imprint of SFHs that are differentiated on Hubble timescales. In this picture galaxies follow smooth trajectories that let them pass across the moving MS, rather than at any time being stochastically scattered around the scaling relation. There is in such a scenario no discernible signature of quenching. That is, no rapid quenching mode and no specific time (other than arguably the peak in the SFH) at which a shutdown in star formation is triggered. In the same vein, Eales et al. (2014) report a continuous distribution of galaxies in the sSFR−M⋆ space, lacking the bimodality in specific SFRs that is undeniably seen in their color distribution. The color bimodality, they argue, reveals the peculiarities of stellar evolution (i.e., ageing stellar populations saturating in color) rather than a signature of galaxy evolution producing two sharply distinct populations of galaxies.
A common interpretation in these studies is that the SFH shape is set by initial density conditions intimately related to dark matter (DM) properties such as the halo formation redshift. A family of log-normal SFHs, parameterized by varying peak times and widths, can yield an adequate description of the relevant observational metrics (Gladders et al. 2013, Abramson et al. 2015, Diemer et al. 2017), although the fact that the central limit theorem produces a similar relation between SFR and stellar mass within a framework in which galaxies grow stochastically illustrates that this inference is not unique (Kelson 2014, Kelson et al. 2016). Speagle et al. (2014) present a hybrid approach in which average SFHs are derived by integrating the MS similar to what was done by Renzini (2009) and Peng et al. (2010), but its scatter is reproduced by imposing an initial spread in formation times on the smooth evolutionary tracks as opposed to adding short-term fluctuations in SFR at a given mass. Turning to numerical simulations of galaxy formation, where the individual evolutionary paths of galaxies are by construction known, Matthee & Schaye (2019) argue that the MS scatter contains contributions from (slightly dominant) short-timescale self-regulation of star formation as well as halo-related variations on Hubble timescales. Of course, the precise contribution from short-timescale fluctuations may depend on the detailed recipes implemented in the numerical simulation. A promising path forward to discriminate between the two schools of thought is to look for correlations between the offset from the MS midline and other SFG properties that can be assumed to vary more slowly over time, such as galaxy structure. Absence of a correlation within the MS scatter would then favor a short-timescale origin, whereas a correlation between MS offset and longer lasting features would favor a Hubble-timescale differentiation. This requires accurate SFR measurements, where possible contrasting MS offsets quantified using multiple SFR tracers (e.g., Fang et al. 2018), ideally with different timescale sensitivities (Caplar & Tacchella 2019). Given the challenges posed in this regard by dust treatment, we conclude that SFRs and stellar masses by themselves may ultimately prove insufficient to recover the underlying evolutionary paths of galaxies. Progress thus entails incorporating the information provided by spatially resolved studies of the build-up of galaxies in all their baryonic components (stars, gas, metals), tied with kinematic tracers of the full gravitational potential (i.e., including DM) and of the feedback processes at play. In the remainder of this Section, we cover the global structure, ISM, and accretion scaling relations, to delve more into resolved properties in Section 4.

The Mass-Size Relation

Following initial work with HST on the sizes of the UV-selected subpopulation of SFGs (Giavalisco et al. 1996, Ferguson et al. 2004), size evolution of mass-complete samples since cosmic noon was first explored in large numbers using ground-based near-IR surveys (Franx et al. 2008, Williams et al. 2009), and was then transformed by rest-optical imaging at high resolution for statistical samples after the installation of WFC3 onboard HST. This Section focuses on stellar light-weighted sizes. Insights gained from a multi-tracer analysis combining stellar mass-weighted sizes and radial distributions of star formation, gas, and dust are covered in Section 4.1.

PSF: Point spread function.
Even when concentrating on a single tracer/wavelength, multiple definitions of galaxy size are possible, and they are increasingly explored alongside one another. Different methods can be classified broadly as parametric and non-parametric. By far the most common approach entails fitting a parametric (usually Sérsic) functional form, convolved with the point spread function, to the two-dimensional (2D) surface brightness distribution, and adopting the radius enclosing 50% of the light (a.k.a. the effective radius) as the size measure, either defined along the major axis or in circularized form (Re,circ = √(b/a) Re). Variations include quantifying galaxy size based on a different percentile (e.g., R80) or decomposing the light distribution into multiple components (e.g., bulge and disk) with a size associated with each. Non-parametric approaches range from curve-of-growth analyses to quantifying the pixel area above a given surface brightness threshold. The former requires a center and aperture definition, whereas the latter is designed to function well also for highly irregular morphologies but requires accounting for cosmological surface brightness dimming and luminosity evolution. Unlike the parametric approach, which applies forward modeling of point spread function (PSF) smearing, the finite resolution is to be accounted for a posteriori in these non-parametric measures, typically using a simulation-based lookup table, as the correction factors are size and profile shape dependent. Here, we outline the main inferences from conventional Sérsic fitting, but note in passing how some conclusions change, even on a qualitative level, when adopting an alternative definition of size. The sizes of star-forming and quiescent galaxies both show a tight (< 0.2 dex intrinsic scatter) but distinct scaling with galaxy stellar mass (van der Wel et al. 2014a, see Figure 4). SFGs are larger than their quiescent counterparts at all masses over the 0 < z < 3 range. Their size-mass relation exhibits a non-evolving slope of d log Re/d log M⋆ = 0.22, compared to the steeper slope of d log Re/d log M⋆ = 0.75 for early-type galaxies. Considering the redshift dependence of the intercept, a slower evolution in the average size of the population at fixed mass is quantified for SFGs (Re ∝ (1 + z)^(−0.75)) compared to the quiescent systems, which as a population show dramatic growth from compact red nuggets at cosmic noon to the large ellipticals in today's Universe (Re ∝ (1 + z)^(−1.48)). Of note is that the above characterizes the evolution in the size distribution of the population, not by itself the evolutionary tracks of individual galaxies.
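A minimal sketch of these size-mass scalings, using the slopes and redshift dependencies quoted above (van der Wel et al. 2014a); the normalizations R0 at the 5 × 10^10 M⊙ pivot are assumed illustrative values, not the published fits.

```python
import numpy as np

def r_eff_kpc(mstar_msun, z, population="sf"):
    """Effective radius from a power-law size-mass relation.

    The slopes (0.22 / 0.75) and redshift scalings ((1+z)^-0.75 /
    (1+z)^-1.48) follow the numbers quoted in the text; the pivot
    normalizations R0 are illustrative placeholders.
    """
    pivot = 5e10  # Msun
    if population == "sf":
        R0, slope, beta = 6.0, 0.22, -0.75   # R0 assumed
    else:
        R0, slope, beta = 4.0, 0.75, -1.48   # R0 assumed
    return R0 * (mstar_msun / pivot) ** slope * (1.0 + z) ** beta

# SFG vs. quiescent size at fixed M* = 5e10 Msun across 0 < z < 3:
for z in (0.0, 1.0, 2.0, 3.0):
    print(z, r_eff_kpc(5e10, z, "sf"), r_eff_kpc(5e10, z, "q"))
```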
Connecting progenitor-descendant sequences based on their constant cumulative number density as outlined in Section 2.5, information from the evolving galaxy stellar mass function can be folded in together with the size measurements to infer that: (1) the progenitors of present-day Milky Way mass galaxies have evolved, on average, along individual growth tracks of ∆ log Re/∆ log M⋆ = 0.27 − 0.3 (i.e., an inside-out growth track slightly steeper than the slope of the star-forming size-mass relation at any epoch); and (2) the most massive galaxies have experienced much steeper size growth, with individual tracks following ∆ log Re/∆ log M⋆ = 2, consistent with scenarios where an early dissipative core formation phase is followed by the build-up of profile wings through dissipationless, predominantly minor, mergers (Patel et al. 2013a).

DARK MATTER HALO AND RELATED PROPERTIES

r200: Virial radius of a DM halo, usually the radius within which the mean mass density is 200 times the critical density for closure of the Universe at the redshift of interest; also denoted Rvir (Mo et al. 1998).
λ: Spin parameter of a DM halo (Bullock et al. 2001).
MDM, JDM, jDM: Mass of a DM halo, and its total and specific angular momentum at the virial radius (with jDM = JDM/MDM).
md, jd: Mass and angular momentum of the baryonic disk galaxy expressed as fractions of the host DM halo mass and angular momentum (such that Mbar = md MDM, Jbar = jd JDM).

The formation of galactic disks is inherently linked to the DM halos that host them. In its simplest form, disk scale lengths are expected to scale with the virial radii of their host halos as:

Rd = (1/√2) (jd/md) λ r200,    (1)

which boils down to a linear scaling with the virial radius r200 provided the accreting baryons retain the specific angular momentum of their host halo (jd/md = 1; Mo et al. 1998). The width of the log-normal distribution in spin parameters λ obtained from N-body simulations in a ΛCDM cosmology (Bullock et al. 2001) is sufficient to account for the observed scatter in the size-mass relation. Such a scenario predicts an evolution in size at fixed halo mass following R ∝ H(z)^(−2/3), in agreement with the observed evolution for late-type galaxies by van der Wel et al. (2014a), who note that a parameterization as a function of H(z) is marginally favored over one with the scale factor (1 + z). Adopting the stellar mass - halo mass (SMHM) relation inferred from abundance matching, the observed size-mass relation can be converted to a galaxy size - halo size relation (Kravtsov 2013, Huang et al. 2017, Somerville et al. 2018). Applied to observations at 0 < z < 3, such analyses reveal a linear relation between Re and r200 and hence evidence for homologous growth between galaxies and their host halos. At least at 0.5 < z < 3 the normalization for late-type galaxies is consistent with expectations from simple disk formation models (see also Section 4.4.4 for kinematic evidence of specific angular momentum retention in an ensemble-averaged sense). The effective radii of early-type galaxies on the other hand lie below the relation at all epochs. Mowla et al. (2019a) however suggest that, expressed in R80, quiescent galaxies and SFGs occupy a single size-mass relation, with these outer size measurements exhibiting a close relationship to the host halos for the full population.
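A minimal numerical sketch of equation 1; the default spin parameter λ ≈ 0.035 is an assumption of this sketch, chosen near the peak of the log-normal λ distribution.

```python
import numpy as np

def disk_scalelength_kpc(r200_kpc, lam=0.035, jd_over_md=1.0):
    """Disk scale length from the simplest halo-based expectation,
    Rd = (1/sqrt(2)) * (jd/md) * lambda * r200 (equation 1; Mo et al. 1998),
    i.e., a linear scaling with virial radius when the accreted baryons
    retain the halo's specific angular momentum (jd/md = 1).
    """
    return r200_kpc * lam * jd_over_md / np.sqrt(2.0)

print(disk_scalelength_kpc(200.0))  # ~5 kpc disk in a 200 kpc halo
```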
Whereas observations and simulations agree on a general linear relation of the form Rd = A r200, recent theoretical work has called into question whether the proportionality constant A, and hence the variation in galaxy size at fixed mass, is set by the halo spin parameter λ as in equation 1, by halo concentration (Jiang et al. 2019), or by a combination of both (Somerville et al. 2018).

LBG: Lyman break galaxy, selected based on its characteristic rest-UV spectral break.

Key in the above results is that they are based on mass-complete samples of galaxies. Individual sub-populations may differ in their growth rate. Allen et al. (2017) report a significantly faster size growth for Lyman break galaxies (LBGs; Re ∝ (1 + z)^(−1.2)) than for the underlying full SFG population since z ∼ 7, a trend also seen in previous studies spanning a more modest redshift range, implying that LBGs represent a special subsample of highly star-forming and compact galaxies. Population differences aside, Ribeiro et al. (2016) report, for the same sample of spectroscopically confirmed SFGs at 2 < z < 4.5, differences in size evolution at fixed mass ranging from Re ∝ (1 + z)^(−1.4) using conventional Sérsic profile fits to no size evolution at all over the considered 2 billion years leading up to cosmic noon when adopting a non-parametric measure of size quantified from the pixel count above a threshold surface brightness. They attribute this to galaxies in their earliest phase of assembly being quite extended and irregular, and poorly described by a single Sérsic profile. An example at later epochs where alternative size definitions change trends in a qualitative manner is work adopting curve-of-growth sizes with a posteriori PSF correction factors to conclude, at odds with van der Wel et al. (2014a), that there is no decline in the number densities of compact quiescent galaxies at z < 1.5, thus placing more emphasis on progenitor bias than on individual galaxy growth as an explanation of the observed size evolution of early-type galaxies.

Cold Gas Content

The cold gas reservoir of galaxies lies at the core of their evolution, fueling their star formation activity and SMBH growth, and efficiently mediating mass, angular momentum, and energy transfer. CO line or far-IR to ∼ 1 mm dust continuum observations have accumulated ample evidence that SFGs at cosmic noon have copious amounts of molecular gas (see reviews by Combes 2018, Tacconi et al. 2020). A recent focus has been on scaling relations described in relation to the MS, facilitating the interpretation in the framework of galaxy evolution and providing well-calibrated recipes to estimate Mgas in the absence of actual cold ISM measurements (e.g., Genzel et al. 2015, Scoville et al. 2017). These analyses showed that over z ∼ 0−4 the depletion time τdepl = Mgas/SFR depends primarily on redshift and MS offset ∆MS = log(sSFR/sSFR_MS(M⋆, z)), and so does the ratio of molecular gas to stellar mass µgas, with an additional dependence on M⋆. The updated derivation by Tacconi et al. (2020) unifies CO and dust continuum-based gas mass estimates, including the most recent NOEMA and ALMA data, and adopts the Speagle et al. (2014) MS parameterization. Accordingly, the depletion time for MS SFGs at fixed M⋆ increases by a factor of ∼ 3 from z = 2 to the present day, while the gas fraction fgas = Mgas/(M⋆ + Mgas) drops by a factor of ∼ 10 (Figure 4).
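The quantities entering these scaling relations are purely definitional and can be sketched directly; the example galaxy values below are illustrative round numbers, not fits from Tacconi et al. (2020).

```python
import math

def depletion_time_gyr(mgas_msun, sfr_msun_yr):
    """tau_depl = Mgas / SFR, returned in Gyr."""
    return mgas_msun / sfr_msun_yr / 1e9

def gas_fraction(mgas_msun, mstar_msun):
    """fgas = Mgas / (M* + Mgas); mu_gas = Mgas / M* is the related ratio."""
    return mgas_msun / (mstar_msun + mgas_msun)

def delta_ms(ssfr, ssfr_ms):
    """MS offset: Delta_MS = log10(sSFR / sSFR_MS(M*, z))."""
    return math.log10(ssfr / ssfr_ms)

# Illustrative z~2 MS galaxy: M* = 5e10 Msun, Mgas = 5e10 Msun, SFR = 100 Msun/yr
print(depletion_time_gyr(5e10, 100.0))  # 0.5 Gyr, well below the Hubble time
print(gas_fraction(5e10, 5e10))         # fgas = 0.5, i.e., mu_gas = 1
```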
It also follows from these gas scalings, the near-linear MS and its evolution (from Speagle et al. 2014), and the size-mass relation for SFGs (from van der Wel et al. 2014a), that the gas mass surface density at fixed M⋆ evolves strongly over 0 < z < 2 as Σgas ∝ (1 + z)^a with a ∼ 4, and more slowly at 2 < z < 4 with a ∼ 2. At all epochs, the average gas depletion time is nearly ten times shorter than the Hubble time, requiring sustained replenishment of the galactic cold gas reservoirs to maintain the SFG population as a whole on the tight observed MS. As summarized by Tacconi et al. (2020), this argument is a cornerstone of the "equilibrium growth" model, and favors the bulk of SFGs being fed by smoother gas accretion modes, via cold streams along the cosmic web and minor mergers, rather than by major mergers. At fixed M⋆ and z, the gas scaling relations imply that the enhanced SFRs well above the MS are driven by both higher gas fractions and higher star formation efficiencies (1/τdepl), plausibly reflecting increased gas accretion and concentration as, e.g., in a major merger event. On the MS, the star formation efficiency is roughly constant but fgas decreases towards higher masses, along with the sSFR, suggesting that quenching is set on by a lack of fuel (resulting from, e.g., suppressed accretion or gas removal) rather than by reduced efficiency (from, e.g., gas stabilization against fragmentation by a massive bulge or ISM heating mechanisms). Setting tighter constraints on these scenarios through measurements of the cold ISM in sub-MS galaxies at z > 1 is very challenging, and the very few results published to date are inconclusive (e.g., Bezanson et al. 2019). Gas scaling relations at z > 1 are most firmly established at log(M⋆/M⊙) ≳ 10, where high-z samples probe the SFG population well and where the luminosity-to-gas mass calibrations are best constrained. The more extensive data sets now available do not support a significant dependence of the CO-H2 conversion on ∆MS (Tacconi et al. 2020). In contrast, there is a strong variation of the CO-H2 conversion and of the dust-to-gas ratio with metallicity (e.g., Genzel et al. 2012, Bolatto et al. 2013), which is folded into the scaling relations given above. At z ≳ 0.5, the atomic gas contribution to Mgas on galactic scales is generally neglected (though a 36% correction for He is applied) since most of the hydrogen is expected to be in molecular form at the high surface densities inferred (> 10 M⊙ pc^−2), and damped Lyman-α absorber studies indicate a slow evolution in the HI gas density (∝ (1 + z)^0.57; Péroux & Howk 2020).

Metallicity and ISM Conditions

The metal content of galaxies is a sensitive probe of the baryon cycle, carrying the imprint of gas accretion, stellar nucleosynthesis, galactic winds, and internal gas mixing. Observational constraints for z ∼ 2 SFGs have largely come from strong rest-optical nebular emission lines, interpreted through empirical and/or theoretical calibrations in terms of the gas-phase oxygen abundance (O/H). These lines also depend on the nebular conditions and structure, and on the excitation sources, affecting the calibrations. Reviews such as Maiolino & Mannucci (2019) discuss in detail the strengths and limitations of the various indicators, and stress the importance of combining multiple diagnostics, of adopting the same method(s) to reduce the impact of systematic differences in calibrations, and of using consistent approaches in deriving the galaxy properties (M⋆, SFR, ...) used to establish scaling relations.

ne: Local electron density, the number of electrons per unit volume of an ionized nebula.
Offsets in the location of (non-AGN) z ∼ 2 SFGs relative to the z ∼ 0 excitation sequences in line ratio diagrams have long been known (e.g., in [NII]/Hα vs. [OIII]/Hβ diagnostic diagrams). The growing near-IR spectroscopic data sets at z ∼ 2 have enabled a more systematic exploration of the origin of the observed offsets, providing evidence for evolving conditions of the ionized gas in terms of a harder ionizing radiation field, an elevated N/O abundance ratio, higher electron density and ISM pressure, and a higher ionization parameter at fixed O/H abundance (e.g., Masters et al. 2016, Strom et al. 2018). Other factors may be at play, such as the presence of weak AGN activity, galactic-scale outflows and shocks, and diffuse ionized gas (the importance of which varies with redshift), as well as sample selection, and aperture and weighting effects whereby spectra of high- and low-z galaxies may encompass different physical regions and span a range of excitation (e.g., Kaasinen et al. 2017, Sanders et al. 2017). Constraints on the electron density of the ionized gas have also been obtained from the [OII] and [SII] doublet ratios, pointing to an increase with redshift, with ne in the range 100 − 400 cm^−3 for z ∼ 2 SFGs compared to ∼ 25 cm^−3 for z ∼ 0 galaxies (e.g., Sanders et al. 2016a, Kaasinen et al. 2017). These estimates may be somewhat inflated by emission from denser gas in the ubiquitous galactic winds at z ∼ 2 (Section 4.6) entering the single-component line fits commonly performed. Turning to metallicity, while the "strong line" methods based on nebular rest-optical emission can lead to systematic differences in log(O/H) by up to ∼ 0.7 dex, relative estimates based on the same calibration are more accurate. The general shape and evolution of the mass-metallicity relation (MZR) agree qualitatively among various studies out to z ∼ 3.5, with lower metallicities at lower M⋆, an overall decline in metallicity at earlier times, and a stronger evolution in the low-mass regime, in agreement with the (scarcer) results from rest-UV metallicity-sensitive features in young stars. Among several proposed parametrizations, the form 12 + log(O/H) = Z0 + log[1 − exp(−(M⋆/M0)^γ)] is physically motivated based on considerations of the chemical yields in the presence of inflows and outflows. It describes well the bending shape of the MZR up to z ∼ 2.5, where Z0 is the asymptotic value at high mass and M0 is the evolving turnover mass (with M0 ∝ (1 + z)^β where β ∼ 2.6 − 2.9) below which the relation follows a power law of index γ ∼ 0.4 − 0.6 (e.g., Zahid et al. 2014, and references therein; see Figure 4). A secondary dependence on the SFR, ultimately tied to the gas fraction, is expected in a theoretical framework where accretion of metal-poor gas dilutes the galactic gas-phase metallicity while increasing the gas reservoir fueling star formation. Based on the large set of SDSS local galaxy spectra and first results at high z, Mannucci et al. (2010) proposed a redshift-invariant fundamental metallicity relation (FMR) between log(O/H), M⋆, and SFR, parametrized in terms of log(M⋆) − α log(SFR). While subsequent work at high z has led to mixed results, possibly due to the limited dynamic range and uncertainties in SFRs, a consensus is now emerging for the detection of a FMR, albeit with hints of a modest evolution with lower log(O/H) at fixed M⋆ and SFR to z ∼ 2.5, and possibly stronger evolution at z ≳ 3 (e.g., Sanders et al. 2018).
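A sketch of this bending MZR form, with the turnover-mass evolution M0 ∝ (1 + z)^β and index γ taken from the ranges quoted above; the numerical values of Z0 and of M0 at z = 0 are illustrative placeholders, not published calibrations.

```python
import numpy as np

def mzr(mstar_msun, z, Z0=8.8, gamma=0.5, beta=2.7, M0_z0=10**9.1):
    """Mass-metallicity relation in the bending form discussed above:

        12 + log(O/H) = Z0 + log10(1 - exp(-(M*/M0)^gamma)),

    with M0 ∝ (1+z)^beta (beta ~ 2.6-2.9) and gamma ~ 0.4-0.6
    (cf. Zahid et al. 2014). At M* >> M0 the relation saturates at Z0;
    at M* << M0 it follows a power law of index gamma. The defaults
    for Z0 and M0(z=0) are illustrative, not fitted values.
    """
    M0 = M0_z0 * (1.0 + z) ** beta
    return Z0 + np.log10(1.0 - np.exp(-((mstar_msun / M0) ** gamma)))

print(mzr(1e10, 0.0), mzr(1e10, 2.0))  # lower O/H at z~2 at fixed M*
```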
Such an evolution may reflect a progressive increase of the mass loading factor η of galactic winds (the ratio of mass outflow rate to SFR) and/or a decrease of the metallicity of inflowing gas with lookback time; beyond z ∼ 3, infall rates of more pristine gas may overwhelm metal production through stellar nucleosynthesis, resulting in stronger dilution. Theoretical models and numerical simulations that match the observed MZR, FMR, and the evolution thereof underscore the role of stellar feedback in the chemical evolution of galaxies, requiring an increasing η in lower-mass galaxies and winds removing gas at roughly the same rate as it is consumed by star formation around log(M⋆/M⊙) ∼ 10 (e.g., Erb 2008, Lilly et al. 2013, Muratov et al. 2015, Davé et al. 2017). More direct observational constraints on η at z ∼ 2 will be discussed in Section 4.6.2.

AGN Demographics

The link between the growth of galaxies and their SMBHs, deduced from local scaling relations and the co-evolution of the cosmic SFR and black hole accretion rate densities, has motivated an abundant literature on AGN activity and feedback across cosmic time (e.g., Fabian 2012, Heckman & Best 2014, Lutz 2014, Brandt & Alexander 2015, Padovani et al. 2017, for reviews). We summarize key aspects of the demographics of radiative-mode AGN at high z.

LX,AGN: X-ray AGN luminosity, generally computed in the rest-frame hard 2−10 keV band and corrected for absorption.

AGN, identified at X-ray and other wavelengths, are preferentially found in higher mass galaxies, which, for an underlying positive correlation between AGN luminosity and host mass, reflects flux limits in the data from which AGN are identified. Comparisons of the host properties of X-ray-selected AGN with those of mass-matched samples of inactive galaxies showed that AGN reside mainly in MS SFGs with little correlation between the X-ray luminosity LX,AGN and SFR, are rarely associated with disturbed morphologies, but are more prevalent in hosts with denser stellar cores (e.g., Silverman et al. 2009, Kocevski et al. 2012, Mullaney et al. 2012a, Santini et al. 2012). The lack of correlation between LX,AGN and SFR is understood in terms of the short-term ≲ 10^6 yr variability of AGN compared to the ≳ 10^8 yr timescales of galactic star formation processes (e.g., Hickox et al. 2014). X-ray stacking analyses, effectively averaging over time, revealed a closer connection between the inferred SMBH accretion rate and host SFR (e.g., Mullaney et al. 2012b). The ratio of average SMBH accretion rate to SFR appears to be largely independent of galaxy stellar mass, and so is the distribution of specific LX,AGN (often taken as a proxy for the Eddington ratio; e.g., Aird et al. 2012, 2018). While the distribution in specific LX,AGN shifts to higher values towards higher z, a mass-independent distribution at fixed z implies that a wider range of LX,AGN/M⋆ is probed at higher host mass. AGN selected by rest-optical and mid-IR diagnostics are less prone to variability effects but susceptible to similar biases related to "dilution" by host galaxy emission (e.g., Padovani et al. 2017). A longer-term connection between LX,AGN and SFR, coupled with evidence from morphologies, is consistent with a picture in which z ∼ 2 AGN are fueled by stochastic accretion, and secular processes (rather than major mergers) within the gas-rich hosts promote the growth of both the SMBH and a central bulge (e.g., Mullaney et al. 2012b).
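The timescale argument can be illustrated with a toy Monte Carlo: if the long-term mean accretion luminosity tracks the SFR but the instantaneous LX fluctuates strongly, the one-epoch LX−SFR correlation is diluted while stacked averages recover the underlying relation. All numbers below (the mean LX−SFR normalization, the 1 dex variability) are illustrative assumptions, not measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 5000
log_sfr = rng.uniform(0.5, 2.5, n)                  # log SFR [Msun/yr]
log_mean_lx = 42.0 + log_sfr                        # assumed mean-LX ~ SFR scaling
log_lx_now = log_mean_lx + rng.normal(0.0, 1.0, n)  # 1 dex stochastic variability

# The instantaneous LX-SFR correlation is diluted by the variability...
r_inst = np.corrcoef(log_lx_now, log_sfr)[0, 1]

# ...but averaging ("stacking") galaxies in SFR bins recovers the mean relation.
bins = np.linspace(0.5, 2.5, 6)
idx = np.digitize(log_sfr, bins)
stacked_lx = [np.log10(np.mean(10.0 ** log_lx_now[idx == i])) for i in range(1, 6)]
print(r_inst, stacked_lx)  # stacked means rise ~linearly with the SFR bin
```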
The exception might be the most luminous and most obscured mid-IR-selected AGN, underrepresented in X-ray surveys, whose morphologies are significantly more frequently disturbed or indicative of merging (Donley et al. 2018). Observations, as well as theoretical models and cosmological simulations (e.g., Somerville & Davé 2015, Naab & Ostriker 2017), support a link between AGN and star formation quenching at high masses. Causality, however, remains elusive so far. Empirical connections through galactic structure and outflows are discussed in Sections 4.1 and 4.6.

4. RESOLVED PROPERTIES OF STAR-FORMING GALAXIES at z ∼ 2

Our understanding of the processes driving the evolution of the global galaxy properties discussed above has greatly benefited from the growing amount of data resolving individual galaxies. A key finding was that high-z SFGs are predominantly disks, albeit more turbulent than local spirals. The growth and evolution of disks as derived from stellar light, star formation, and kinematic tracers is discussed first (Sections 4.1 − 4.3), followed by emerging dynamical constraints on the interplay between baryons and DM on galactic scales (Section 4.4) and deviations from disk rotation (Section 4.5). Non-gravitational motions (i.e., gas outflows) are then addressed, as a direct probe of feedback in action (Section 4.6).

4.1. Star-Forming Galaxies as Axisymmetric Systems

4.1.1. Morphological disk settling and the emerging Hubble sequence. Many key features regarding the structural build-up of SFGs can be captured in a framework where we consider them as flattened, axisymmetric structures. This approach also fundamentally underpins semi-analytical models, where any structural evolution is only described radially. Intrinsic 3D shapes inferred from projected axial ratio distributions illustrate how at any given epoch there is a tendency of increased fractions of SFGs with prolate (i.e., elongated) shapes in the low-mass regime, whereas the fraction of oblate (i.e., disky) systems increases with mass and toward later times (van der Wel et al. 2014b, Zhang et al. 2019). This downsizing pattern for morphological disk settling finds its counterpart in kinematic surveys, which show similar mass and redshift dependencies for the fractions of orderly rotating disks, with dispersion-dominated systems gaining in prevalence toward lower masses (see Section 4.3 and Figure 4). This Section discusses the radial characteristics of SFGs with an emphasis on relatively massive (≳ 10^10 M⊙) systems for which the axisymmetric disk framework is most appropriate. In the next Section we discuss how and where the actual morphology deviates from axisymmetry. Salient features of the size-mass relation of SFGs were discussed in Section 3.4. The same HST surveys also shed light on surface brightness profile shapes, often quantified parametrically with a Sérsic model. For the MS population, exponential disk profiles with n ∼ 1 are the norm (Wuyts et al. 2011b), in line with the disk-like nature inferred from axial ratios and kinematics. Exceptions arise more frequently at the very tip of the MS, and among the rare population of starbursting outliers above the MS, which on average are characterized by more centrally concentrated profiles. We can conclude that the overall structure quantified from rest-optical/UV light and colors correlates with location in the M⋆ − SFR plane, with most star formation happening in disks while quiescent galaxies feature cuspier profiles.
A Hubble sequence, in which the dominant morphology and stellar populations are intimately tied, can thus be said to be in place since at least z ∼ 2.5. While it is most straightforward to compare sizes and profile shapes across epochs at fixed mass, individual galaxies build up stellar mass over time through star formation and mergers. Applying the cumulative comoving number density technique outlined in Section 2.5, progenitor-descendant sequences have been reconstructed to reveal the growth in size and the build-up of extended profile wings around central cores for galaxies at the most massive end, and to recover the structural growth history of Milky Way progenitors (see also Patel et al. 2013b). The latter feature a more modest size growth and, at least at 1 < z < 2.5, a more self-similar evolution in profile shape than the most massive galaxies, which increase rapidly in Sérsic index. Other than by disentangling the population growth from the growth of individual systems, major advances in our understanding of structural evolution are arising from comparing multiple tracers. Initially this focused on rest-UV to rest-optical stellar emission, but increasingly it is complemented by resolved probes of ionized and molecular gas as well as of reprocessed emission by dust.

M/L: Mass-to-light ratio.

4.1.2. Stellar mass distributions. With resolved imaging sampling the distribution of stellar light below and above the Balmer/4000Å break out to z ∼ 2.5, a picture has emerged in which negative color gradients (i.e., redder centers than outskirts) become increasingly prominent towards the high-mass end and at later times (e.g., Liu et al. 2017, 2018). The age-dust degeneracy in a space of mass-to-light (M/L) ratio vs. rest-optical color (e.g., Bell & de Jong 2001) allows for a relatively robust translation of the multi-band light maps to a stellar mass distribution (e.g., Szomoru et al. 2013, Tacchella et al. 2015, Wang et al. 2017, Suess et al. 2019, see Figure 5). In common to such studies is the finding of more compact and centrally concentrated stellar mass profiles compared to those observed in light, especially at lower redshifts and higher masses. Carrying out bulge-disk decompositions on stellar mass maps, Lang et al. (2014) find that while SFGs are well described by exponential disks at low masses, once crossing the Schechter mass they already contain 40−50% of their stars in a bulge component, even prior to their eventual quenching. Overall, taking both SFGs and quiescent galaxies together, it is now well established that measures of bulge prominence or central surface density (e.g., Σ1kpc; Cheung et al. 2012, Barro et al. 2017a, see Figure 4) form much more reliable predictors of quiescence than stellar mass by itself. However, the origin of this strong correlation, in particular its interpretation in terms of a causal connection, remains debated (Lilly & Carollo 2016, Abramson & Morishita 2018). Building on the increased prevalence of AGN with host central stellar mass density and the empirical inference that quenching sets in when the cumulative radiative energy of the SMBH reaches ∼ 4× the halo binding energy, Chen et al. (2020) recently put forward a phenomenological model that strengthens the role of AGN in quenching by naturally explaining the structural differences between star-forming and quenched galaxies.
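A minimal sketch of the M/L-based mass-mapping step described above, assuming a generic linear log(M/L)−color relation in the spirit of Bell & de Jong (2001); the coefficients a and b are illustrative placeholders, since published calibrations depend on band, color, and assumed IMF.

```python
import numpy as np

def stellar_mass_map(light_map, color_map, a=1.0, b=-0.5):
    """Translate a light map plus one rest-optical color into a stellar
    mass map via an assumed linear log10(M/L)-color relation. Because of
    the age-dust degeneracy, redder pixels (older or dustier) receive a
    higher M/L either way, which is what makes the mass map robust even
    when the cause of the color gradient is ambiguous.
    """
    log_ml = a * color_map + b           # log10(M/L) per pixel (illustrative)
    return light_map * 10.0 ** log_ml    # mass in units of light * (M/L)

# Toy 2-pixel example: equal light, but the redder pixel carries more mass.
print(stellar_mass_map(np.array([1.0, 1.0]), np.array([0.5, 1.5])))
```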
4.1.3. Observed Hα profiles. WFC3 grism surveys such as 3D-HST give access to the Hα surface brightness distributions on kpc scales for galaxies out to z ∼ 1.5. Such observations have illustrated that the Hα emission of MS galaxies, tracing the unobscured instantaneous star formation, on average follows exponential disk profiles, and that there is a resolved equivalent of the MS: a correlation between the local star formation and stellar surface densities (see also Wang et al. 2017). Deviations from this relation are seen in the centers of SFGs, particularly massive ones, with asymmetric features such as clumps also contributing to the scatter (Section 4.2). Stacking Hα and H140 maps of ∼ 3200 z ∼ 1 SFGs, Nelson et al. (2016b) find the (unobscured) star formation to be slightly more extended than the stellar continuum emission, with a weak dependence on mass: Re,Hα/Re,H = 1.1 (M⋆/10^10 M⊙)^0.054. Translated to Hα equivalent widths (EWs), this results in centrally dipping EW profiles, with the central depression in Hα EW being most prominent at the high-mass end.

EW: Equivalent width, for an emission line equal to the ratio of line flux to continuum flux density.

AO-assisted IFU surveys were able to push resolved Hα EW measurements out to z ∼ 2.5, with qualitatively similar findings (Tacchella et al. 2015). With such numbers at present limited to a few dozen (fewer when considering the high-mass end alone) and accumulated at a rate of ∼ one 8-m telescope night per object, significant progress on number statistics is anticipated here from grism observing modes on JWST. Already with existing ground-based (yet seeing-limited) instrumentation, however, larger samples with consistent continuum and Hα size measurements over the full 0.6 < z < 2.6 range can be compiled. Doing so, Wilman et al. (2020) find an average size ratio of Re,Hα/Re,F160W = 1.26, without significant dependence on redshift, mass, or star formation activity. Adopting the observed size ratio as an upper limit to Re,SF/Re,M⋆ (an upper limit due to the possible presence of differential extinction and dust gradients), they infer the associated size growth due to star formation alone to proceed along a vector of d log Re/d log M⋆ ∼ 0.26, consistent with results from constant comoving number density arguments and only slightly steeper than the observed slope of the size-mass relation at any epoch. Processes other than simply adding new stars, such as feedback, angular momentum redistribution, (minor) mergers, and the preferential quenching of more compact SFGs, may need to be invoked to reconcile the relatively slow growth due to star formation with the observed size evolution of SFGs.

4.1.4. Attenuation gradients. In the absence of dust, all of the above radial profiles, size differences, and red centers would be attributed most straightforwardly to stellar population age (or sSFR) gradients consistent with a picture of inside-out disk growth. SFGs at cosmic noon, however, are far from dust free, particularly in the massive and highly star-forming regime where most of the internal color dispersion (e.g., Boada et al. 2015) and radial gradients are seen. With only a single rest-optical color, the effects of age and dust are fully degenerate. While this enables a robust estimate of spatial M/L ratio variations, explaining the origin of these variations (spatially inhomogeneous SFH vs. extinction) is by the same token a challenging task. Several approaches have been pursued to pin down to what degree levels of extinction vary across galaxy disks. One approach used resolved SED modeling of 7-band ACS+WFC3 photometry to constrain the stellar populations of individual pixel bins in 0.5 < z < 2.5 SFGs.
Nelson et al. (2016a) were able to extract a more direct probe of extinction for z ∼ 1 SFGs in the form of the Balmer decrement (Hα/Hβ), although relying on stacked profiles in relatively broad bins of stellar mass. Complementary broadband approaches further include the use of the dust-sensitive UV slope β (Tacchella et al. 2018) and of a rest-frame UVI color-color diagram (Wang et al. 2017). While quantitative differences remain and direct comparisons are complicated by differences in applicable redshift range and technique (e.g., individual galaxies vs. stacking), a converging picture is emerging. Galaxies do feature radial gradients in extinction, with the amount of central enhancement increasing with stellar mass, reaching ∼ 2 magnitudes of central extinction at the high-mass end. Propagating this knowledge to the reconstruction of sSFR profiles yields, on average, surprisingly flat profiles over the full radial range for intermediate-mass galaxies. Only among the most massive galaxies do central drops in the star formation activity remain present after dust correction, a trend that is interpreted as a signature of inside-out quenching. For example, Tacchella et al. (2018) exploit near-IR AO-assisted IFU data at z ∼ 2 to find a radially constant mass-doubling timescale of ∼ 300 Myr for SFGs below ≲ 10^11 M⊙, and central star formation suppression by a factor of ∼ 10 above this mass. At z ∼ 1, Wang et al. (2017) report qualitatively similar results, with flat sSFR profiles for SFGs below 10^10.5 M⊙ and central declines of 20−25% above this mass (see also Liu et al. 2016). The flat inferred sSFR profiles of intermediate-mass galaxies are seemingly at odds with the inside-out growth inferred from constant comoving number density arguments (Section 4.1.1). Possibly the stellar build-up proceeds more rapidly outside the inner ∼ 2 Re within which most stellar population and dust gradients have been quantified, but it has also been argued that the mass-weighted size growth may be more modest than the observed light-weighted one (Suess et al. 2019). Resolved UVJ diagrams and direct measurements of radial Balmer decrement profiles of individual galaxies will undoubtedly play a vital role in progressing our understanding of where within SFGs stars are formed, and are within reach of JWST's imaging and (grism) spectroscopic capabilities. That said, we caution that the central effective AV of ∼ 2 magnitudes inferred for massive SFGs under a foreground screen approximation may well conceal total dust column densities that are several times higher, depending on the dust geometry, its clumpiness, and its albedo (Seon & Draine 2016). We thus conclude that dust modeling at present poses a key challenge to quantifying galaxy sizes and SFR distributions at the massive end.

4.1.5. Compact dusty cores and implications for SFR profiles. Having highlighted the significant role of dust, it is important to underline the potential offered by far-IR to radio observations to complement our view of where star formation is happening (as seen reprocessed by dust) and where within the disks the cold gas, the fuel for star formation, resides (as revealed by CO line emission). Here, ALMA, NOEMA, and the JVLA, with their recently enhanced sensitivities and long baselines, are making major contributions. In low-J CO transitions, MS galaxies feature a similar extent as observed in the (rest-)optical. This appears to be the case both at z ∼ 1 and at z ∼ 2 (Bolatto et al. 2015), although numbers in the higher redshift bin remain limited.
A different picture is painted when considering continuum probes of star formation. Composed predominantly of non-thermal synchrotron radiation from charged particles accelerated within supernova remnants, 1.4 GHz continuum emission serves as a dust-unbiased SFR tracer (Condon 1992). Using a uv-stacking algorithm to trace the 1.4 GHz continuum size evolution of ∼ 1000 MS SFGs with 10^10 − 10^11 M⊙ spanning 0 < z < 3, Lindroos et al. (2018) find the measured radio sizes to be typically a factor of two smaller than those measured in the rest-optical. Likewise, focusing on thermal dust emission from a sample of normal MS galaxies at log(M⋆/M⊙) ≳ 11, Tadaki et al. (2017a) combined ALMA 870 µm observations in compact and extended configurations to infer that the dust sizes of their targets were more than a factor of two smaller than those observed at rest-optical (and even more so Hα) wavelengths. These results are in contrast to what would naively be anticipated given the typically centrally declining Hα EWs in the same high-mass regime at z ∼ 2, even after dust corrections. Star formation happening in such centrally concentrated cores could within several hundred Myr build up a central bulge with Σ1kpc > 10^10 M⊙ kpc^−2, akin to the central densities of lower redshift quiescent systems. Resolved maps at a second, higher frequency IR wavelength are needed to rule out or reveal any negative gradients in dust temperature that may bias the inferred half-SFR sizes to low values. In the handful of objects where resolved CO and dust continuum measurements are both available, authors also noted the smaller dust compared to CO sizes (Spilker et al. 2015, Tadaki et al. 2017b). Rujopakarn et al. (2016) on the other hand found the 5 cm and 1.3 mm sizes of somewhat lower mass (log(M⋆/M⊙) ∼ 10.7) SFGs at the same redshift to both be comparable to the extent of the stellar mass maps. Enhancing the robustness of multi-tracer structural measurements and interpreting the relative sizes of dust, stellar, and Hα emission as a function of mass stands as an important challenge for future studies. This applies especially to reconciling the apparently inconsistent findings from cold ISM and Hα observations at the massive end. At present, ambiguity remains whether this is due to uncertain dust corrections or instead to differences between samples that fit into a common evolutionary sequence in which massive galaxies undergo compaction events triggering nuclear starbursts (responsible for the compact dust sizes) followed by a phase of inside-out quenching (responsible for the centrally declining Hα EWs; e.g., Tacchella et al. 2016).

4.2. Deviations from Axisymmetry

4.2.1. Shapes and morphologies. Thus far, we discussed the structural properties of high-z SFGs in terms of sizes and radial profiles. Skewed axial ratio distributions for low-mass (log(M⋆/M⊙) < 10) SFGs at cosmic noon suggest that a framework of flattened axisymmetric disks may be inappropriate for young systems that have not had the time to settle into an equilibrium disk configuration (Law et al. 2012b, van der Wel et al. 2014b). Modeling the joint distribution of projected axis ratios and sizes, and accounting for the finding that smaller SFGs are systematically rounder, Zhang et al. (2019) argue that prolate and/or spheroidal shapes may in fact be even more common than inferred by van der Wel et al. (2014b), also in the log(M⋆/M⊙) = 10 − 10.5 regime.
They report young, low-mass galaxies in the VELA suite of high-resolution hydrodynamical cosmological zoom-in simulations to be prolate as well. Kinematics reveal a qualitatively similar trend, with a threshold mass for disk settling that decreases with decreasing redshift (Section 4.3). Even above 10^10 M⊙, the morphological appearance of high-z SFGs often looks markedly different from that of the relatively smooth disk population in the local Universe. Rising fractions of irregular morphologies were first noted in early HST observations probing the rest-UV (Griffiths et al. 1994, Windhorst et al. 1995, Abraham et al. 1996), and were later quantified using the larger samples provided by rest-optical legacy surveys such as CANDELS, including with automated classification schemes (e.g., Hocking et al. 2018). Cross-comparisons often show a good concordance between these approaches. This does not mean, however, that the physical origin of the irregular morphologies, often featuring asymmetries in the form of off-center clumps, can be readily interpreted. While historically frequently used alongside pair counts to quantify the evolution in merger rates, clumpy morphologies are nowadays more often interpreted as massive star-forming regions originating in marginally stable, gas-rich disks. The tightness of the MS, kinematic evidence for ordered rotation, average surface brightness profiles and axial ratio distributions, as well as probes of the cold gas reservoirs, all contributed to this paradigm shift. In addition, the wavelength dependence of clumpy morphologies (more prominent in the rest-UV where they are identified), attributed to spatial variations in the SFH and/or dust extinction, also implies that the underlying mass distribution is smoother than the galaxies appear in light, unlike what may be expected for mergers (e.g., Cibinel et al. 2015). Indeed, several studies have addressed the ability to identify mergers at cosmic noon by exploiting mock observations of galaxies extracted from simulated cosmological volumes where their (non-)merger state is intrinsically known (e.g., Snyder et al. 2015, Thompson et al. 2015). This exercise reveals dependencies of the completeness and contamination fraction of the selected mergers on merger stage, viewing angle, and depth of observation, with some of the simulations that reproduce realistically high gas fractions at z ∼ 2 yielding results that are no better than a random guess (Abruzzo et al. 2018). This does not imply that mergers do not happen, nor that all clumps share the same formation process. Targeting mostly higher redshifts (2 < z < 6), Ribeiro et al. (2017) find the most massive clumps (∼ 10^9 M⊙) to typically reside in galaxies featuring just 2 clumps, arguably interpretable as a merger, whereas less massive clumps (< 10^9 M⊙) occur in galaxies featuring a larger number of them, consistent with disk fragmentation. The distinction between ex-situ and in-situ clumps, with the former featuring higher masses and older stellar ages, is also seen in hydrodynamical simulations (Mandelker et al. 2017).

4.2.2. Clump properties. Turning to the properties of individual clumps, a first realization stemming from multi-band stellar population analyses of these features is that, while striking in appearance, they do not dominate the integrated UV emission of the galaxies, let alone add up to a major contribution to the star formation, and even less so account for a substantial fraction of the overall stellar mass.
While a precise breakdown depends on details of sample selection, clump selection (e.g., threshold depth and wavelength), and whether and how underlying diffuse disk emission is accounted for (e.g., Förster Schreiber et al.), different censuses report clump contributions (i.e., summed over all clumps) to the overall UV emission, SFR, and stellar mass of mass-selected SFGs at cosmic noon of ∼ 20%, ∼ 5−18%, and ≲ 7%, respectively (e.g., Guo et al. 2015, Soto et al. 2017). The fraction of SFGs that appear clumpy is itself a function of both mass and redshift. While ∼ 60% of low-mass (log(M⋆/M⊙) < 9.8) SFGs feature clumpy UV morphologies over the full 0.5 < z < 3 range, the clumpy fraction for intermediate- and high-mass SFGs drops from 55% to 40% and from 55% to 15% over the same z range, respectively (Guo et al. 2015). The characteristic scales of giant star-forming clumps reported in the literature are on the order of a kiloparsec, with corresponding stellar masses ranging up to a few 10^9 M⊙ (e.g., Förster Schreiber et al.). These scales are in accordance with the Toomre scale and mass anticipated for gravitational instabilities within gas-rich turbulent disks (Elmegreen 2009, Dekel et al. 2009a). It is worth noting though that structures on these scales are only marginally resolved in studies of field galaxies, and may correspond to conglomerations of blended clumps of smaller physical scales. Samples of a handful of lensed galaxies reaching spatial resolutions of 20 − 100 pc do indeed reveal progressively smaller clump sizes as the resolution is enhanced with respect to blank field observations (Dessauges-Zavadsky et al. 2017, Rigby et al. 2017). This is illustrated perhaps most convincingly by the analysis of multiple lensed images of the same object at different magnifications (Cava et al. 2018). In this light, zoom-in simulations of turbulent gas-rich disks resolving the multi-phase ISM on parsec scales will prove useful in tracing fragmentation below the Toomre scale and in interpreting the higher resolution observations that will become feasible with JWST and ultimately the extremely large telescopes. Already, first attempts have been made on lensed samples to characterize the clump mass functions (Dessauges-Zavadsky & Adamo 2018), yielding results consistent with the power-law slope of −2 anticipated for fragmentation due to a turbulent cascade (Chandar et al. 2014, Adamo et al. 2017). Typical stellar ages inferred for the star-forming clumps are on the order of 100−200 Myr (e.g., Guo et al. 2012). A single massive clump consisting almost entirely of line emission (i.e., massive in gas, but an order of magnitude lower in stellar mass) was discovered by Zanella et al. (2015), who estimate an age of < 10 Myr for it, supporting in-situ formation by gravitational collapse as the origin of the clump phenomenon. Mimicking the azimuthally averaged radial trends of stellar population tracers discussed in Section 4.1, clumps themselves also feature redder rest-optical colors, lower Hα EWs, and, inferred from those, older ages (by a few 100 Myr) and lower sSFRs towards the galaxy centers (e.g., Adamo et al. 2013, Guo et al. 2012, Soto et al. 2017). The gradients steepen with increasing stellar mass and decreasing redshift, and are found to be overall steeper than the radial gradients observed for the intra-clump regions.
As a caveat, we note that in most of these studies radial gradients are quantified on the basis of ensembles of clumps collected from multiple galaxies within relatively coarse bins of mass and redshift, as the number of detectable clumps in individual systems remains limited. The longevity of clumps is an outstanding question with significant implications for the subsequent structural evolution of the galaxies that host them. If remaining intact and surviving internal stellar feedback for a few hundred Myr, their inward migration due to dynamical friction is predicted to be an efficient mode of in-situ bulge growth (e.g., Bournaud et al. 2007, Elmegreen et al. 2008, Ceverino et al. 2010). On the other hand, simulations with stronger feedback implementations such as FIRE (Oklopčić et al. 2017) and NIHAO (Buck et al. 2017) feature shorter clump lifetimes (≲ 50 Myr) and substantially less inward migration. Despite their differences, both flavors of simulations claim to reproduce the observed phenomenology of wavelength-dependent clump prominence, the characteristic clump stellar ages, and even the radial gradients (e.g., Oklopčić et al. 2017, Mandelker et al. 2014). A duty cycle argument relating the existence of a very young clump as found by Zanella et al. (2015) to the abundance of equally massive clumps that are older supports long inferred clump lifetimes (∼ 500 Myr). Measured ages of the stellar populations in clumps may not necessarily match the timescale of clump survival, as clumps are in constant interaction with their surrounding disk through outflows, tidal stripping, and continued accretion (Bournaud et al. 2014). Perhaps the observable with the most discriminating power between the different suites of simulations will prove to be the gas fraction, on an individual clump basis, but even already at the galaxy-integrated level.

4.3. Star-Forming Galaxies as Rotating Turbulent Disks

Near-IR IFU observations, mainly of Hα but also of [OIII] or [OII] line emission, have provided the most comprehensive and detailed censuses of the kinematic properties of z ∼ 2 SFGs, and the most convincing evidence for the prevalence of disks among them. Mitigating the M/L variations that can complicate the interpretation of morphologies, especially at z > 1, kinematics trace the full underlying mass distribution and are a sensitive probe of a system's dynamical state. Spatially resolved kinematics of cold gas line emission from (sub)mm interferometry are still scarce for typical z ∼ 2 MS SFGs, and while near-IR slit spectra have also been exploited to derive emission line kinematic properties, they give spatially limited information with larger uncertainties related to slit placement relative to the galaxy center and kinematic axis. Stellar kinematics at z > 1 are still restricted to quiescent galaxies, absent of young hot stars filling in absorption features, and in all but a few cases are limited to galaxy-integrated velocity dispersions. The first step in exploiting 3D kinematic data is to identify the nature of the galaxies.

KINEMATIC PROPERTIES

Rotation curve (RC): Rotation velocity v vs. galactocentric radius r. For a "Freeman" thin disk with exponential surface density distribution, scale length Rd, and y ≡ r/2Rd, v^2(r) = 4πGΣ0 Rd y^2 [I0(y)K0(y) − I1(y)K1(y)], where G is the gravitational constant, Σ0 is the central surface density, and Ii and Ki are the modified Bessel functions of order i.
At fixed mass profile M(r), thick disks (scale height h ∼ 0.2 − 0.3 Rd) have a ∼ 8% lower peak v, reached at ∼ 10% larger radius, while in the spherical approximation the peak is ∼ 15% lower and at ∼ 20% smaller radius (Freeman 1970).
vrot: Maximum intrinsic rotation velocity (i.e., corrected for beam smearing and galaxy inclination when measured from observations), with Rmax denoting the radius where it is reached in intrinsic space.
v2.2: Intrinsic rotation velocity at r = 2.2 Rd, where a Freeman disk RC peaks (corresponding to 1.3 Re). For n ≠ 1 profiles, v2.2 differs from the peak vrot.
σ0: Local intrinsic velocity dispersion (i.e., corrected for beam smearing when derived from observations); it is assumed to be isotropic and constant across disks (Section 4.3.2).
vc: Circular velocity, here as a measure of the potential well. For a thin disk, vc = vrot; for a thick disk with non-negligible turbulent pressure gradient, vc^2(r) = vrot^2(r) + 2σ0^2 (r/Rd).
S0.5: Alternative kinematic estimator for a spherically symmetric system in an isothermal potential, defined as S0.5^2 = 0.5 vrot^2 + σ0^2 (e.g., Weiner et al. 2006a).
Mdyn: Enclosed dynamical mass. For a spherical distribution, Mdyn(r) = r vc^2/G; for a Freeman disk, the corresponding expression follows from the RC given above.
f⋆, fbar, fDM: Ratio of stellar, baryonic, and DM mass to dynamical mass.
jd: Specific angular momentum of a (disk) galaxy, ∝ v(r) × r.
(A short numerical sketch of the RC and vc expressions above follows at the end of this subsection.)

Different procedures are followed for this classification, but they conceptually rely on similar criteria based on the 2D maps and on the main derived parameters, the maximum rotation velocity vrot and the local velocity dispersion σ0, corrected as appropriate for spatial and spectral resolution and for galaxy inclination (extraction methods are summarized in the Supplemental Text). The basis is encapsulated in the following set of disk criteria adopted in several studies, motivated by expectations for an ideal rotating disk, and increasingly stringent and demanding of the data (e.g., Förster Schreiber et al.):

1. a smooth monotonic velocity gradient across the galaxy, defining the kinematic axis;
2. a centrally peaked velocity dispersion distribution with maximum at the position of steepest velocity gradient, defining the kinematic center;
3. dominant rotational support, quantified by the vrot/σ0 ratio;
4. co-aligned morphological and kinematic major axes (small kinematic misalignment);
5. spatial coincidence of the kinematic and morphological centers.

Application of these criteria is usually done from measurements of the parameters and visual inspection, or through comparisons to disk models. Kinemetry, an approach based on harmonic expansion along ellipses of the moment maps of the line-of-sight velocity distribution, has also been used in some studies to quantify the degree of asymmetry in velocity and dispersion maps, either as the main classification or in support of the criteria above. Details on disk modeling and kinemetry can be found in the Supplemental Text. It is increasingly common to supplement the kinematic criteria with information on galaxy morphology and possible companions, e.g., from HST imaging, for a more complete characterization. The outcome of the morpho-kinematic classification scheme depends on how well the galaxies are resolved and how sensitive the data are. It is usually adequate to provide a first-order description of the system and the basis for quantitative interpretation of the measurements.
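As flagged in the sidebar, its expressions can be evaluated directly. The sketch below implements the Freeman rotation curve and the pressure-supported circular velocity for an illustrative massive z ∼ 2 disk; the disk mass, scale length, and σ0 values are round numbers chosen for illustration.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_freeman(r_kpc, md_msun, rd_kpc):
    """Rotation curve of a Freeman exponential disk (see sidebar):
    v^2(r) = 4 pi G Sigma0 Rd y^2 [I0(y)K0(y) - I1(y)K1(y)], y = r/2Rd,
    with central surface density Sigma0 = Md / (2 pi Rd^2).
    Peaks near r = 2.2 Rd."""
    sigma0_surf = md_msun / (2.0 * np.pi * rd_kpc ** 2)
    y = r_kpc / (2.0 * rd_kpc)
    v2 = (4.0 * np.pi * G * sigma0_surf * rd_kpc * y ** 2
          * (i0(y) * k0(y) - i1(y) * k1(y)))
    return np.sqrt(v2)

def v_circ(r_kpc, v_rot, sigma_0, rd_kpc):
    """Circular velocity of a turbulent thick disk including the
    pressure-gradient term from the sidebar:
    vc^2(r) = vrot^2(r) + 2 sigma0^2 (r/Rd)."""
    return np.sqrt(v_rot ** 2 + 2.0 * sigma_0 ** 2 * (r_kpc / rd_kpc))

# Illustrative disk: Md = 1e11 Msun, Rd = 3 kpc, sigma0 = 45 km/s (z~2 value)
r = 2.2 * 3.0
vrot = v_freeman(r, 1e11, 3.0)
print(vrot, v_circ(r, vrot, 45.0, 3.0))  # ~240 km/s, slightly higher vc
```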
Different procedures are followed, but they conceptually rely on similar criteria based on the 2D maps and on the main derived parameters of maximum rotation velocity vrot and local velocity dispersion σ0, corrected as appropriate for spatial and spectral resolution and for galaxy inclination (extraction methods are summarized in the Supplemental Text). The basis is encapsulated in the following set of disk criteria adopted in several studies, motivated by expectations for an ideal rotating disk, and increasingly stringent and demanding of the data (e.g., Förster Schreiber et al.):
1. a smooth monotonic velocity gradient across the galaxy, defining the kinematic axis;
2. a centrally peaked velocity dispersion distribution with maximum at the position of steepest velocity gradient, defining the kinematic center;
3. dominant rotational support, quantified by the vrot/σ0 ratio;
4. co-aligned morphological and kinematic major axes (a.k.a. kinematic misalignment);
5. spatial coincidence of the kinematic and morphological centers.

Application of these criteria is usually done from measurements of the parameters and visual inspection, or through comparisons to disk models. Kinemetry, an approach based on harmonic expansion along ellipses of the moment maps of the line-of-sight velocity distribution, has also been used in some studies to quantify the degree of asymmetry in velocity and dispersion maps, either as the main classification tool or in support of the criteria above. Details on disk modeling and kinemetry can be found in the Supplemental Text. It is increasingly common to supplement the kinematic criteria with information on galaxy morphology and possible companions, e.g., from HST imaging, for a more complete characterization. The outcome of the morpho-kinematic classification scheme depends on how well the galaxies are resolved and how sensitive the data are. It is usually adequate to provide a first-order description of the system and the basis for quantitative interpretation of the measurements. Deeper data detecting fainter extended emission and/or higher resolution (AO-assisted vs. seeing-limited) set better constraints on the nature of the galaxies and can reveal additional interesting features (Section 4.5). The choice of vrot/σ0 threshold varies from 1 to 3 between different studies, with the intermediate value of √3.36 ≈ 1.8 corresponding to equal contributions from rotation and random motions to the dynamical support of a turbulent disk.
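The intermediate threshold follows directly from the turbulent-disk circular velocity relation given in the KINEMATIC PROPERTIES sidebar, evaluated at Re = 1.68 Rd (the n = 1 value):

```latex
v_c^2(R_e) = v_{\rm rot}^2 + 2\,\sigma_0^2\,\frac{R_e}{R_d}
           = v_{\rm rot}^2 + 3.36\,\sigma_0^2 ,
\qquad
v_{\rm rot}^2 = 3.36\,\sigma_0^2
\;\Leftrightarrow\;
\frac{v_{\rm rot}}{\sigma_0} = \sqrt{3.36} \approx 1.8 .
```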
Several efforts have been devoted to assessing the reliability of kinematic classification based on mock observations of template data, ranging from nearby systems to high-resolution cosmological simulations. Low misclassification fractions of ∼ 10%−30% are generally obtained for disks and major mergers alike, with the range reflecting the specific criteria employed and the data resolution and S/N (e.g., Épinat et al. 2010, Bellocchi et al. 2016). Using zoom-in simulations from the VELA suite of z ∼ 2 isolated galaxies and mergers, viewed over many sightlines to create ∼ 24000 mock-observed data sets in 0.6″ seeing, Simons et al. (2019) conclude that disks are identified with high confidence, while misclassification of mergers as disks varies widely but, unsurprisingly, is lowest (≲ 20%) when applying all criteria above and folding in HST-like morphological information.

4.3.1. Disk fractions. Recent large kinematic surveys have confirmed the findings from earlier smaller samples that up to z ∼ 2.5 a large proportion of massive SFGs are fairly regular disks, albeit with higher velocity dispersions than present-day spirals. The largest and most complete surveys, comprising hundreds of SFGs on/around the MS at 9 ≲ log(M⋆/M⊙) ≲ 11.5 with resolved kinematics from KMOS, find ∼ 70%−80% of rotation-dominated galaxies (i.e., satisfying criteria 1−3 above, with vrot/σ0 > 1), a result borne out by deep AO-assisted SINFONI data of 35 z ∼ 1.5−2.5 SFGs in the same mass range. Imposing all criteria reduces the disk fraction fdisk to ∼ 50%−60%. Significant trends in the kinematic mix of SFGs are emerging from z ≳ 0.6 IFU surveys, with lower fdisk at earlier epochs and, at fixed z, towards lower masses. These results strengthen and extend out to z ∼ 3.5 findings from optical and near-IR slit spectroscopy over z ∼ 0.2−2.5 (e.g., Kassin et al. 2012, Simons et al. 2017).

fdisk: Fraction of galaxies classified as disks.

The dependence on M⋆ and z of the fraction of rotation-dominated galaxies is illustrated in Figure 4 (where the curves are adjusted to match the binned data presented by Simons et al. 2017). The trends reflect primarily those with vrot/σ0, with the evolution of σ0 largely driving the z variation, and the connection between vrot and galaxy mass (via the Tully-Fisher relation) dominating the M⋆ dependence (see Sections 4.3.2 and 4.4.2). The variation of disk fraction and vrot/σ0 with galaxy mass and redshift has been interpreted in a "disk settling" scenario (Kassin et al. 2012). Massive SFGs settled earlier into more rotationally-dominated "mature" disks, gradually followed by lower-mass galaxies at later times, with more massive disks being dynamically colder at all epochs. This evolution is reflected in the trends between mass, morphology, and specific angular momentum of disks (discussed in Section 4.4.4). It also finds its counterpart in the structure of the stellar component inferred from the projected axial ratio distributions in HST imaging (Section 4.2), and is qualitatively reproduced by the recent high-resolution TNG50 cosmological simulation (Pillepich et al. 2019).

Sections 4.3.2 and 4.4 focus on the properties of disks identified as described above and interpreted in an ideal disk framework; Section 4.5 comments on deviations thereof. The mass dependence of the disk fraction implies that disk samples preferentially probe, on average, higher-mass SFGs compared to mass-selected samples.

4.3.2. Disk turbulence. The elevated gas velocity dispersion of z ∼ 2 disks is well established and implies that they are geometrically thick, as observed in HST imaging (e.g., Elmegreen & Elmegreen 2005). At the level of beam smearing of high-z observations (∼ 4−5 kpc in natural seeing, and ∼ 1−2 kpc using AO), unresolved noncircular motions induced by deviations from axisymmetry of the gravitational potential (e.g., massive clumps, bars) or related to outflows may contribute to the measured σ0 along with local turbulent gas motions. The agreement in σ0 between no-AO and AO data sets (after beam smearing corrections) suggests that potential noncircular motions on ≳ 1 kpc scales have little impact on the measurements. For simplicity, σ0 is usually referred to as "turbulence." Typical dispersions measured in ionized gas are ∼ 45 km s−1 at z ∼ 2, compared to ∼ 25 km s−1 at z ∼ 0, varying as σ0 ≈ 23 + 10z km s−1; cold atomic and molecular gas measurements at z > 0 are scarcer but follow a similar evolution, albeit with ∼ 10−15 km s−1 lower dispersions (Übler et al. 2019, and references therein). The σ0 evolution is consistent with that of the galactic gas mass fractions in the framework of marginally-stable Q ∼ 1 gas-rich disks, in which vrot/σ0 ∝ fgas^−1 (e.g., Genzel et al. 2011, Johnson et al. 2018).
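To make the vrot/σ0 ∝ fgas^−1 statement concrete: for a marginally Toomre-stable gas disk with a flat rotation curve (epicyclic frequency κ = √2 vrot/r), Qgas = √2 σ0/(fgas vrot), so Q ≈ 1 implies vrot/σ0 ≈ √2/fgas. The short sketch below inverts this relation; the numerical values are illustrative assumptions, not measurements from the surveys discussed here.

```python
import numpy as np

def vrot_over_sigma(f_gas, a=np.sqrt(2.0)):
    """v_rot/sigma_0 for a marginally stable (Q ~ 1) gas disk.

    a = sqrt(2) corresponds to a flat rotation curve; f_gas is the
    gas fraction relative to the dynamical mass.
    """
    return a / f_gas

# Illustrative values (assumptions, not measurements):
for z, sigma0, vrot in [(0.0, 25.0, 220.0), (2.0, 45.0, 180.0)]:
    f_gas = np.sqrt(2.0) * sigma0 / vrot   # inverting the Q = 1 relation
    print(f"z~{z}: sigma0={sigma0} km/s, vrot={vrot} km/s "
          f"-> implied f_gas ~ {f_gas:.2f}")
```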
At fixed redshift, the scatter in σ0 is substantial, and there is evidence that an important part of it is intrinsic to the galaxy population, but only a weak or no trend is found between σ0 and global galaxy parameters such as M⋆, SFR, fgas, mass and SFR surface densities, or inclination (e.g., Jones et al. 2010a, Johnson et al. 2018, Übler et al. 2019). Reasons could include limited ranges and uncertainties in properties in a given z slice, a complex dependence of σ0 on more than one parameter, or possible accretion-driven variations on short ≲ 100 Myr timescales as recently proposed by Hung et al. (2019) based on FIRE high-resolution numerical simulations. In high S/N AO data of well resolved disks, no convincing trend on spatially-resolved ∼ 1−2 kpc scales has been seen either between σ0 and ΣSFR, or even galactocentric radius in the outer disk parts (away from where beam smearing corrections become large and more uncertain; Förster Schreiber et al., Übler et al. 2019). Given the lack of clear variations, the disk dispersions are thus taken as isotropic and radially constant.

Constraining the physical driver(s) of the gas turbulence at high z thus still proves difficult. This supersonic turbulence would rapidly decay within a crossing time (∼ 10^7 yr) if not continuously powered, and gas accretion from the cosmic web, disk instabilities, and stellar feedback have been proposed as energy sources (see, e.g., the summaries by Krumholz et al. 2018 and Übler et al. 2019). Theoretical models and numerical simulations make different predictions as to the amount of gas turbulence generated (e.g., Aumer et al. 2010, Hopkins et al. 2012, Gatto et al. 2015, Goldbaum et al. 2015). The impact of stellar feedback varies considerably depending on the inclusion/treatment of radiation pressure and the location where feedback is injected into the ISM, although a general conclusion is that it can maintain galaxy-wide turbulence of ∼ 10−20 km s−1 (and is necessary to reproduce various other galaxy properties and scaling relations). In contrast, gravitational processes, including gas transport and instabilities within the disks, appear to more easily match the observed range of σ0 under the conditions prevailing at higher redshifts. Plausibly, both forms of drivers are present, as in the unified model of Krumholz et al. (2018), with gravity-driving dominating at earlier cosmic times and a gradual transition to feedback-driving at later times. Further insights will benefit from more direct estimates of cold gas masses in individual galaxies, and from maps of the gas, SFR, and kinematics at high spatial and velocity resolution.

Mass and Angular Momentum Budget

Constraints from resolved kinematics have been used to investigate the mass budget and angular momentum of high z SFGs. At z ∼ 2, it is important to account for the significant contribution of gas to the baryonic component, and of the random motions to the dynamical support. In the turbulent disk framework, the circular velocity vc (as a measure of the potential well) at radius r can be computed through $v_c^2 = v_{\rm rot}^2 + 2\sigma_0^2\,(r/R_d)$. Corrections can be applied for deviations from n ≈ 1 profiles (e.g., when a massive bulge is present), and for disk "truncation" appreciably reducing Re/Rd in strongly dispersion-dominated cases (vrot/σ0 ≲ 2). The enclosed dynamical mass can be estimated, for instance at Re, through $M_{\rm dyn} = R_e\,v_c^2/G$, where G is the gravitational constant. This expression is for the spherical approximation; for an infinitely thin Freeman disk the values would scale down by ×0.8. Other methods to derive dynamical masses have been used, including a two-pronged approach applying the rotating disk estimator neglecting pressure support for rotation-dominated disks (i.e., taking vc = vrot) and the virial mass estimator with the integrated dispersion for dispersion-dominated sources ($M_{\rm dyn} = \alpha R_e \sigma^2/G$, with α in the range ∼ 3−5 typically adopted). Forward modeling accounting for disk thickness and turbulence, and fitting simultaneously the velocity and dispersion profiles, incorporates all relevant effects self-consistently, though comparisons with the simpler approaches outlined above indicate overall agreement within ∼ 0.2 dex (e.g., Förster Schreiber et al.).
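The estimators just described are simple to apply; the sketch below contrasts them for one hypothetical set of measurements (all input numbers are invented for illustration), showing how neglecting pressure support biases Mdyn low for a moderately turbulent system.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def mdyn_turbulent(vrot, sigma0, Re, Rd):
    """Spherical-approximation M_dyn(Re) including pressure support:
    v_c^2 = v_rot^2 + 2 sigma_0^2 (Re/Rd)."""
    vc2 = vrot**2 + 2.0 * sigma0**2 * (Re / Rd)
    return Re * vc2 / G

def mdyn_rotation_only(vrot, Re):
    """Same estimator but taking v_c = v_rot (no pressure support)."""
    return Re * vrot**2 / G

def mdyn_virial(sigma_int, Re, alpha=5.0):
    """Virial estimator for dispersion-dominated sources."""
    return alpha * Re * sigma_int**2 / G

# Hypothetical galaxy: vrot = 150 km/s, sigma0 = 60 km/s, Re = 4 kpc, Rd = Re/1.68
Re, Rd = 4.0, 4.0 / 1.68
m_full = mdyn_turbulent(150.0, 60.0, Re, Rd)
m_rot = mdyn_rotation_only(150.0, Re)
print(f"with pressure support: {m_full:.2e} Msun")   # ~3.2e10
print(f"rotation only:         {m_rot:.2e} Msun")    # ~2.1e10, biased low
```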
4.4.1. Dynamical vs. baryonic mass estimates. In the most straightforward approach to constraining the mass budget, global dynamical mass estimates are compared to stellar and gas mass estimates. Studies based on near-IR IFU or slit spectroscopy data generally concur on overall elevated baryonic mass fractions fbar = (M⋆ + Mgas)/Mdyn, with large scatter, among z ∼ 2 SFGs (e.g., Förster Schreiber et al.). Modeling deep Hα kinematic data over a wide M⋆ range across z ∼ 0.7−2.7 from the KMOS3D survey, in legacy fields providing detailed constraints on galaxy stellar and size properties, Wuyts et al. (2016b) found a large rise in fbar derived within the central 1 Re, from ∼ 45% at z ∼ 0.9 to ∼ 90% at z ∼ 2.3, and a modest increase in stellar mass fraction f⋆ = M⋆/Mdyn from ∼ 30% to ∼ 40%, reflecting the fgas evolution. The scatter at fixed z is driven by positive correlations with the average stellar and gas mass surface densities within Re. These trends hold when accounting for mass incompleteness or considering only progenitors of z = 0 log(M⋆/M⊙) ≥ 10.7 galaxies, and are fairly robust to SED modeling assumptions or gas scaling relations among plausible choices. At z ∼ 2, the Mdyn-based results thus leave little room for a DM mass contribution (fDM) within the ∼ 1−2 Re probed by the observations. Noting that the analyses above are for a Chabrier IMF, more bottom-heavy galaxy-wide IMFs, such as a Salpeter slope down to 0.1 M⊙, would also be disfavored.

4.4.2. Tully-Fisher relation. The Tully-Fisher relation (TFR) relates measures of galaxy mass to the full potential well; it is thus sensitive to the galactic baryonic content and can place powerful constraints on cosmological disk formation models (e.g., Mo et al. 1998, Dutton et al. 2007, Somerville et al. 2008, McGaugh 2012, among many others). Kinematic studies agree on the existence of a TFR out to z ∼ 3, and on the reduced scatter about the relation when accounting for pressure support in the turbulent high z disks, but with mixed outcomes as to the evolution, ranging from none over z ∼ 0−1 (e.g., Kassin et al. 2007, Miller et al. 2012, Tiley et al. 2019a) to significant, in the sense of lower disk mass at fixed velocity, for z ∼ 0.6−3.5 samples (e.g., Übler et al. 2017). The conclusions hinge on several interrelated factors, including the adopted form and parametrization of the relation, the galaxy sample properties, and the choice of reference z ∼ 0 TFR (Übler et al. 2017, Tiley et al. 2019a). The range in galaxy parameters spanned by the high z data sets generally hampers reliable fits to the slope of the relation, such that the evolution is usually quantified in terms of the zero-point (ZP) obtained assuming a non-evolving slope. The magnitude of the ZP offsets also depends on whether the relation is constructed from the stellar or the baryonic mass, and from vrot, v2.2, vc, or S0.5. Exploiting the wide 0.7 < z < 2.7 baseline from KMOS3D, the study of Übler et al. (2017) provided the most self-consistent constraints across cosmic noon, based on Hα kinematics from IFU observations, an identical analysis method, selection through uniform data quality, galaxy parameters, and stringent disk criteria, with the resulting log(M⋆/M⊙) > 10 subsamples well matched in M⋆ and location around the MS and mass-size relations. Focusing on (fixed-slope) relations in terms of vc, the stellar TFR shows no significant ZP evolution from z ∼ 2.3 to ∼ 0.9, while the baryonic TFR ZP exhibits a negative evolution (lower Mbar at fixed vc), and both relations imply a similar positive evolution since z ∼ 0.9 compared to published z ∼ 0 TFRs. In the latter redshift interval, Tiley et al. (2019a) found instead little, if any, evolution in terms of M⋆ − v2.2 using matched data quality, methods, and samples over log(M⋆/M⊙) ∼ 9−11 from the KROSS and local SAMI IFU surveys of Hα. The persisting discrepancies around z ∼ 1 underscore the importance of disentangling observationally- and physically-driven effects in order to establish firmly the evolution and explore the residuals of the TFR across all of 0 < z < 3.
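A fixed-slope ZP measurement of the kind described above reduces to a one-parameter fit; the sketch below shows the basic operation on mock data (the slope value and the scatter are placeholders, not the values adopted by the studies cited).

```python
import numpy as np

rng = np.random.default_rng(42)

def tfr_zeropoint(log_m, log_vc, slope):
    """Best-fit zero point of log M = ZP + slope * log v_c at fixed slope
    (the least-squares solution is the mean residual)."""
    return np.mean(log_m - slope * log_vc)

# Mock sample: log vc in [2.0, 2.5], intrinsic ZP = 1.5, slope = 4, 0.2 dex scatter
slope = 4.0                      # placeholder fixed slope
log_vc = rng.uniform(2.0, 2.5, 200)
log_m = 1.5 + slope * log_vc + rng.normal(0.0, 0.2, 200)

zp = tfr_zeropoint(log_m, log_vc, slope)
print(f"recovered ZP = {zp:.2f} (input 1.5)")
# An evolution study would compare ZP between redshift bins at the same fixed slope.
```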
4.4.3. Outer disk rotation curves. Constraining the mass distribution from the shape of the rotation curve (RC) alleviates the uncertainties of global M/L conversions for the baryonic components. This approach is challenging at z ∼ 2 with current instrumentation, as tracing emission line kinematics beyond ∼ 1−2 Re requires very long integrations. Recent results from very sensitive Hα IFU data of a handful of massive z ∼ 1−2.5 star-forming disks extending to r ∼ 10−20 kpc showed significant and symmetric drops in the individual RCs beyond their peak. Similar falloffs in stacked Hα RCs reaching ∼ 3.5−4 Re, constructed from high quality IFU data of ∼ 100 typical log(M⋆/M⊙) ≳ 10 star-forming disks, suggested that this behaviour may be widespread at high z, and on average more pronounced towards higher z and lower vrot/σ0 disks (Lang et al. 2017). The outer slopes for these samples are nearly Keplerian, in stark contrast to the flat or rising RCs of local spirals. The falloffs can be naturally explained as the imprint of baryons strongly dominating the mass over the regions probed by the kinematics, together with sizeable levels of pressure support maintained well past the RC peak. The more detailed constraints from the individual extended RCs and dispersion profiles, simultaneously fit with turbulent disk + bulge + DM halo models, yield fDM(Re) ≲ 20%, with the 3/6 galaxies at z > 2 having the lowest fractions. In turn, the stacked RC is best matched by models with a high ratio of total baryonic disk mass to DM halo mass, md ∼ 0.05, in line with the analysis of the angular momenta of a larger sample of z ∼ 0.8−2.6 SFGs, and consistent with abundance matching results once accounting for the high fgas at high z (M⋆/Mhalo ∼ 0.02; Moster et al. 2013, Behroozi et al. 2013a). Although the exact numbers depend on details of the distribution of the mass components, the implications of low central DM fractions and an overall high disk to DM halo mass ratio were shown to be fairly robust to the assumptions within plausible ranges.

Figure 6 (from Übler et al. 2018): Example kinematic modeling of a massive z = 1.4 SFG with sensitive Hα and CO 3−2 observations, a bulge-to-total mass ratio of ∼ 0.25, and large vrot/σ0 ∼ 10. Left: RC in observed and intrinsic space. The observed, folded Hα and CO velocity curve (grey squares) extends to 18 kpc. The best-fit model curve of the circular velocity (vc) in intrinsic space is plotted as a blue line (with blue shading showing the 1σ uncertainties of the inclination correction). The other lines show, successively, the effects of pressure support (i.e., the vrot curve; cyan line), which are minimal in this galaxy, the effects of inclination (vrot × sin(i); yellow line), and the resulting beam-smeared velocity curve in observed space (red line). Right: The relative contribution to the model vc in intrinsic space (blue line) from the baryons and from the DM halo (green and purple lines, respectively). Baryons strongly dominate within the half-light radius, while DM starts to dominate the mass budget beyond ≈ 12 kpc or ≈ 3 Re (vertical solid and dashed lines).

These findings spurred a number of follow-up studies, reporting mixed results. For instance, Tiley et al. (2019b) concluded that the averaged Hα outer RCs at z ∼ 0.9−2.5 are flat or rising, in contrast to Lang et al. (2017). As noted by both groups, the stacking methodology matters.
Re-scaling the data according to the observed radius Rmax and velocity at the RC peak, as favored by Lang et al., is more sensitive to the relative concentration of baryons vs. DM and, although it relies on detecting a change of slope in the inner velocity gradient, possible biases against the recovery of flat or rising RCs were shown to be unlikely. Normalizing instead with the radius and velocity in observed space corresponding to 3 Rd, based on n = 1 fits to the morphologies as favored by Tiley et al., probes the baryonic to DM content on more global scales, but any spread in Rmax/Rd from a range in Sérsic indices would smear the peak in the composite RC. Importantly, the stacked samples are different, with the stricter disk selection of Lang et al. resulting in higher mass and vrot/σ0 ranges compared to Tiley et al. Comparisons are therefore not straightforward, but given the large variations in fDM with radius, the conclusion of Tiley et al. (2019b) that within 6 Rd ≈ 3.6 Re high z SFGs are DM-dominated is not necessarily incompatible with them being strongly baryon-dominated within 1 Re, even when displaying a flat outer RC in high vrot/σ0 cases (Figure 6 and, e.g., Übler et al. 2018). On-going extensions to several tens of z ∼ 0.7−2.7 disks with high quality individual kinematics data are revealing ever more clearly a dependence on galaxy mass, redshift, and measures of central baryonic mass concentration, which were apparent in some previous outer RC studies. These trends echo the findings from the global mass budget (Section 4.4.1), account for the strong baryon dominance to r ∼ 8 kpc reported for 10 compact massive 2 < z < 2.5 SFGs from the declining composite RC inferred from integrated Hα line widths, and explain the range of conclusions from different outer RC samples.

4.4.4. Angular momentum. The connection between z ∼ 2 SFGs and their host DM halos has been further explored via measurements of the specific angular momentum jd ∝ vrot × Re. The inferred halo-scale angular momenta are broadly consistent in mean and scatter with the theoretically predicted lognormal distribution of halo spin parameters λ, and the jd estimates scale approximately as ∝ M⋆^(2/3), similar to the theoretical jDM ∝ MDM^(2/3). The long-made assumption that on average jd/jDM ∼ 1, expected if disks retain most of the specific angular momentum acquired by tidal torques in their early formation phases and shown to hold for local spirals (e.g., Fall & Romanowsky 2013), thus appears to be borne out by observations up to z ∼ 2.5. Even in a population-wide sense, this finding is not trivial given that (i) infalling baryons can lose and gain angular momentum from the virial to the disk scale (e.g., Danovich et al. 2015), (ii) angular momentum can be efficiently redistributed in and out of galaxies (e.g., Dekel et al. 2009a, Übler et al. 2014, Bournaud 2016), and (iii) < 15% of the cosmically available baryons are incorporated into the stellar component of galaxies (md,⋆ ≈ 0.02; e.g., Moster et al. 2013, Behroozi et al. 2013a), and at most 30% when including gas at z ∼ 2.
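For an n = 1 disk, the jd estimator reduces to a simple expression. The sketch below adopts jd = 2 vrot Rd, the exact prefactor for an exponential disk with a flat rotation curve (Romanowsky & Fall 2012); the galaxy numbers are illustrative assumptions, not measurements from the samples discussed above.

```python
def j_disk(vrot, Rd):
    """Specific angular momentum of an exponential disk with a flat
    rotation curve: j_d = 2 v_rot R_d."""
    return 2.0 * vrot * Rd   # kpc km/s

# Illustrative galaxies (hypothetical values): vrot [km/s], Rd [kpc], log M*
galaxies = {"low-mass": (120.0, 2.0, 9.8),
            "high-mass": (250.0, 4.0, 11.0)}
for name, (vrot, Rd, logm) in galaxies.items():
    jd = j_disk(vrot, Rd)
    print(f"{name}: j_d = {jd:.0f} kpc km/s, "
          f"j_d / M*^(2/3) = {jd / 10**(2.0/3.0 * logm):.2e}")
# Roughly constant j_d/M*^(2/3) is the signature of the j ∝ M^(2/3) scaling.
```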
Although simulations and semi-analytical models are now able to produce realistic distributions of galaxy size, specific angular momentum, and stellar-to-halo mass ratios, there is no consensus yet on how the various mechanisms interact to preserve net angular momentum (e.g., Genel et al. 2015, Zavala et al. 2016, Jiang et al. 2019). The observed scatter in specific angular momenta has an intrinsic component at all epochs. The low-jd tail encompasses massive early-type spirals and ellipticals at z ∼ 0, and more dispersion-dominated (and unstable) as well as more centrally concentrated star-forming disks in the high z samples. These correlations reflect an underlying mass-spin-morphology relation that likely underpins the Hubble sequence (e.g., Obreschkow & Glazebrook 2014) and may suggest that "disk settling" with cosmic time (see Section 4.3) is driven at least in part by angular momentum evolution (e.g., Swinbank et al. 2017). Noting that central mass concentration increases with galaxy mass and thus "disk maturity" (see Sections 4.1 and 4.3), one study of a z ∼ 0.7−2.6 sample of star-forming disks found a significantly weaker anticorrelation of λ × (jd/jDM) with the central stellar surface density Σ1kpc than with the galaxy-averaged Σ⋆ and Σgas, a result suggesting that the accumulation of (low angular momentum) material in the galaxy centers may be decoupled from the processes that set the global disk scale and angular momentum.

4.4.5. Interpreting the mass and angular momentum budget. A consistent picture appears to be emerging in which log(M⋆/M⊙) ≳ 10 star-forming disks at z ∼ 2 are typically baryon-rich on the physical scales probed by the data, with lower DM mass contribution at < 1 Re among more massive, centrally denser, and higher z galaxies. These trends are qualitatively reproduced by matched populations (in M⋆, SFR, Re) in recent cosmological numerical simulations (e.g., Wuyts et al. 2016b, Lovell et al. 2018, Teklu et al. 2018). While the role of the evolving gas content can be easily understood, the trends in mass fractions and ZP of the TFRs point to differences in the relative distribution of baryons vs. DM on galactic scales among SFGs of comparable masses at different cosmic epochs. These have been ascribed to a combination of (i) disk size growth at fixed mass, where the baryons at lower z extend further into the surrounding DM halo, (ii) evolving DM halo profiles, with shallower inner profiles at earlier times (e.g., less concentrated, or more cored; Martizzi et al. 2012, Dutton & Macciò 2014), and (iii) efficient dissipative processes in the gas-rich environments at higher z concentrating baryons in the central regions (e.g., Dekel & Burkert 2014, Zolotov et al. 2015). The weaker coupling of λ × (jd/jDM) with Σ1kpc than with Σ⋆ and Σgas would naturally result from inward radial gas transport through the latter processes. The kinematically inferred low fDM(Re) of massive z ∼ 2 star-forming disks overlaps with the range for z ∼ 0 massive early-type galaxies, their likely descendants. This echoes evolutionary links based, for instance, on the stellar sizes and central mass densities (Sections 3.4 and 4.1), and on the fossil record (e.g., Cappellari 2016). Current z ∼ 2 results are summarized in Figure 7 (following Genzel et al. 2017 and Übler et al. 2018, incorporating an expanded sample of individually modeled RCs).

Figure 7 (caption fragment): Median fDM(Re) from modeling the inner-region kinematics of larger SFG samples at z ∼ 2.3 (red square) and z ∼ 0.9 (blue square) from Wuyts et al. (2016b) are overplotted, as well as results from quiescent galaxies at z ∼ 1.7 based on stellar velocity dispersions presented by Mendel et al. (2020). Approximate areas occupied by z = 0 massive early-type and late-type galaxies (ETGs, LTGs; from Martinsson et al. 2013) and the Milky Way (Bland-Hawthorn & Gerhard 2016) are indicated with blue shading and the cross-hair symbol.

The inverse dependence of fDM(Re) on galaxy mass (and mass concentration) is reminiscent of the trends observed in local disks, captured by the unified picture of Courteau & Dutton (2015).
In this picture, the outward-moving transition from baryon-dominated center to DM-dominated outskirts (relative to a fiducial 2.2 Rd ≈ 1.3 Re for n ∼ 1) in more massive systems can be tied to the disk size-circular velocity-stellar mass scaling relations, with the scatter in fDM attributed at least in part to size variations at fixed vc. The differentiation in fDM(Re) at fixed mass seen between local early- and late-type galaxies (e.g., Courteau & Dutton 2015) also appears to be present at cosmic noon (e.g., Mendel et al. 2020), which plausibly is rooted in the same processes that lead to the distinction between SFGs and quiescent galaxies in their stellar structural properties (e.g., Lang et al. 2014, van der Wel et al. 2014a).

By necessity, the kinematics of z ∼ 2 star-forming disks are interpreted in a simplified axisymmetric framework with circular orbital motions. Observations of local disks indicate frequent deviations from this simple assumption caused, for instance, by interactions, warps and other such dynamical instabilities, and radial motions, which are difficult to constrain at the typical resolution and S/N of high-z data. Signatures of the latter are discussed in the next Section. Bending instabilities, such as warping or buckling, may be expected to be suppressed or short-lived in gas-rich turbulent disks (see the discussion by Genzel et al. 2017). Minor interaction-induced perturbations may not be ruled out, but the exclusion of galaxies with potential companions wherever possible should reduce their role in disk samples. The validity of the disk framework for low-mass objects may be called into question in light of the increasing prevalence of prolate and/or triaxial systems towards lower masses and higher z suggested by statistical studies of the morphological axial ratios (Section 4.2), although this may be a lesser concern when applying the morpho-kinematic disk criteria (notably the requirement of kinematic and morphological major axis alignment; e.g., Franx et al. 1991). Furthermore, the generally small and spatially flat residuals in velocity and dispersion maps compared to axisymmetric disk models (resulting from the disk selection criteria employed in most studies) suggest that the potential impact of minor merger perturbations and prolateness/triaxiality is small in the kinematic analyses.

Cosmological simulations are useful to assess the validity of assumptions made in interpreting data under more realistic high redshift environments. For instance, Wellons et al. (2020) quantified the effects of pressure gradients, noncircular motions, and asphericity of the gravitational potential on the rotation velocity and Mdyn estimates in high-resolution FIRE numerical simulations of a range of massive turbulent disks at 1 ≲ z ≲ 3, based on the mass particle distributions, finding that pressure support usually makes the largest impact and that, when it is accounted for, kinematically-derived mass profiles agree with the true enclosed mass typically to within ∼ 10% over the r ≲ 10−20 kpc range explored. Realistically replicating observables and empirical methodologies from simulations is not straightforward and is subject to various limitations (numerical resolution, sub-grid recipes, radiative transfer, ...), but the informative potential is motivating a growing number of investigations to improve on both the simulation ingredients and the data interpretation.
Deviations from Disk Rotation

In kinematics data of z ∼ 2 SFGs, modest deviations from regular patterns are seen in a subset of galaxies otherwise consistent with global disk rotation. Interpreting such kinematic asymmetries is not trivial in high z data, but they can plausibly be ascribed to internally- or externally-induced in-disk inflows, or to outflows. As will be discussed further in the next Section, the emission associated with the latter has a broad velocity distribution but low amplitude, and should have a negligible effect on the single-component line profile fitting that is usually performed in extracting 2D kinematic maps, unless the outflow is particularly strong (Förster Schreiber et al. 2018).

The gas-rich environments prevailing at z ∼ 2 are expected to naturally promote perturbations in the marginally-stable Q ∼ 1 gas-rich disks, with fragmentation and efficient transport of material towards the center via, e.g., inward gas streaming and clump migration, while the gas reservoirs of galaxies are continuously replenished by anisotropic accretion via streams and minor mergers. Material streaming inwards can induce twists in the isovelocity contours and off-center peaks in the dispersion map at the level of a few tens of km s−1 (vrad ∼ 2 σ0 × (σ0/vrot)), and differences in magnitude and orientation of the angular momentum between inner and outer regions expected to remain even after bulge growth slows (e.g., van der Kruit & Allen 1978, Cappellari 2016). Characteristic signatures thereof are indeed identified in some of the z ∼ 2 galaxies with high S/N, high resolution AO-assisted observations, along with morphologically identified bar- and spiral-like features in some cases (e.g., Genzel et al. 2006, Law et al. 2012a). These processes may be important in bulge and SMBH buildup, and in concurrent disk growth through angular momentum redistribution (e.g., Bournaud et al. 2014, Dekel & Burkert 2014, Zolotov et al. 2015). The ubiquity of dense stellar cores and large nuclear concentrations of cold gas in massive z ∼ 2 SFGs, and the weak correlation of disk-scale angular momentum with Σ1kpc, call for further sensitive and high resolution kinematics data to more directly assess the role of radial gas transport at cosmic noon vs. alternative scenarios such as inside-out galaxy growth (e.g., Lilly & Carollo 2016).

Strong kinematic distortions are generally interpreted as indicative of major merging. Assuming very simplistically that all SFGs not identified as rotation-dominated disks according to the classification scheme of Section 4.3 are major mergers, the fractions thereof would be ∼ 25%−40% at z ∼ 1−2.5 and log(M⋆/M⊙) ≳ 10.5 (depending on the exact set of criteria and z). These fractions are comparable to those inferred from morphologies and close pair statistics in a similar mass range (e.g., Conselice 2014, López-Sanjuan et al. 2013, Rodrigues et al. 2018), and consistent with cosmological simulations (e.g., Genel et al. 2008, Snyder et al. 2017). Taking the major merger fraction as 1 − fdisk is obviously an oversimplification. Shallower data are more biased towards high surface brightness regions that may partly and unevenly sample the full system and result in apparently disturbed kinematics, an effect exacerbated for clumpy morphologies (see Fig. 9 of Förster Schreiber et al.). A poorly resolved, low vrot/σ0 object is not necessarily a major merger.
More face-on disks may also be more difficult to identify because of the resulting small projected velocity gradient, reduced central dispersion peak, and possible clumps biasing the determination of the morphological position angle and center (Wuyts et al. 2016b). As is the case for morphologies, kinematic signatures of interactions depend strongly on the system's orbital configuration, the properties and mass ratio of the progenitor galaxies, the sightline, and the merger stage (e.g., Bellocchi et al. 2016, Simons et al. 2019), introducing uncertainties in identifying major mergers. Despite these uncertainties, the kinematic mix among log(M⋆/M⊙) ≳ 10 SFGs at z ∼ 2 suggests that they spend a dominant fraction of their time in a disk configuration, consistent with several other lines of evidence from scaling relations of galaxy properties pointing to the importance of processes other than major merger events in building up stellar mass and structure.

Galactic-scale Outflows

Galactic winds are thought to play a critical role in the evolution of galaxies by regulating their mass build-up, size growth, star formation, and chemical enrichment, by redistributing angular momentum, and by mediating the relationship between SMBHs and their host galaxies. Stellar feedback at low galaxy masses expels gas from the shallow potential wells, reducing the reservoirs fueling star formation and keeping the galactic metal content low (e.g., Dekel & Silk 1986, Davé et al. 2017). Above the Schechter mass log(M⋆/M⊙) ∼ 10.7 (or log(Mhalo/M⊙) ≳ 12), accreting SMBHs are thought to be important in suppressing star formation, through ejective "QSO mode" feedback driving powerful winds during high Eddington ratio phases that sweep gas out of the host galaxy, and through subsequent preventive "radio mode" feedback maintaining galaxies quenched by depositing kinetic energy into the halo, which inhibits cooling alongside virial shocks (see the review by Fabian 2012).

IS: Interstellar.

Galactic winds should be particularly effective at the peak epoch of star formation and SMBH accretion rates. The most easily accessible diagnostics at high z are rest-UV to optical interstellar (IS) absorption features and nebular emission lines, which probe the neutral and warm ionized gas phases. Winds are identified through their kinematic imprint: centroid velocity offsets and broad wings of blueshifted IS absorption relative to the systemic redshift (e.g., from stellar features), redshifted Lyα profiles (accessible at z > 2), and broad line emission typically underneath a narrower component arising from star-forming regions in the galaxy.

Alongside understanding the physical drivers of outflows, a major goal of studies at high z is to assess their role in galaxy evolution. To this aim, population-wide censuses are essential to reveal the global and time-averaged impact of outflows, reducing biases from selection on properties that would be closely linked to the strongest activity. Such censuses have been greatly facilitated by the advent of optical and near-IR MOS and IFU instruments. IFU observations have proven particularly powerful, by (i) locating the launching sites and constraining the extent of the outflowing gas, and (ii) facilitating the separation between large-scale gravitationally-driven and outflow-related motions that both contribute to velocity broadening in integrated spectra.

4.6.1. Outflow Demographics at z ∼ 2.
Much like in the nearby Universe (e.g., Veilleux et al. 2005), SF- and AGN-driven winds at high z are distinguished on the basis of their velocities, spatial origin, and excitation properties (Figure 8). SF-driven outflows with velocities up to several 100 km s−1 are detected from shifted IS absorption and Lyα emission (e.g., Shapley et al. 2003, Weiner et al. 2009), and from broad emission with FWHM (full width at half maximum) ∼ 400−500 km s−1 in Hα, [NII], and [SII] on galactic and sub-galactic scales (e.g., Genzel et al. 2011, Newman et al. 2012a). In deep ∼ 1−2 kpc resolution IFU+AO observations, the broad emission arises from extended regions across the galaxies and is often enhanced near bright star-forming clumps. The line excitation properties are consistent with dominant photoionization by young stars and possibly a modest contribution by shocks. Faster AGN-driven winds, with velocities up to a few 1000 km s−1 in z ∼ 2 galaxies hosting luminous log(LAGN/erg s−1) > 45 AGN, are identified from various rest-UV/optical tracers (see the reviews by Fabian 2012, Heckman & Best 2014). In near-IR observations, spatially extended broad emission with typical FWHM ∼ 1000−2000 km s−1 is detected in Balmer as well as forbidden [NII], [SII], and [OIII] emission (precluding a significant contribution from high-density broad-line region gas; Nesvadba et al. 2008, Cano-Diáz et al. 2012, Cresci et al. 2015). It typically originates from the center of galaxies, can extend over 5−10 kpc for luminous QSOs, and both broad and narrow component line ratios indicate high excitation.

SF- and AGN-driven winds follow distinct demographic trends, most clearly revealed in a recent near-IR IFU study of a sample of ∼ 600 primarily mass-selected galaxies at 0.6 < z < 2.7, covering a wide range in both mass and star formation activity levels (9.0 < log(M⋆/M⊙) < 11.7 and −3.6 < ∆MS < 1.2, see Figure 8; Förster Schreiber et al.). SF-driven outflows are observed at all masses, with an incidence that correlates mainly with star formation properties, more specifically the MS offset, the specific and absolute SFR, and ΣSFR. In contrast, the incidence of AGN-driven outflows (identified based on the combination of rest-optical line profiles and multi-wavelength AGN diagnostics) depends strongly on stellar mass and measures of central stellar mass concentration, irrespective of the level and intensity of star formation activity. The strong differentiation in the resulting stacked spectra and the decoupling in incidence trends suggest little cross-contamination between dominant SF- and AGN-driven winds.

Several aspects are important in interpreting demographics and comparing between studies. In both nebular emission and IS absorption tracers, the ability to detect an outflow depends on the strength of the wind signature (along with the S/N and spectral resolution of the data), such that the trends in incidence partly reflect trends in outflow properties. Slower or weaker winds are more difficult to detect, especially in nebular lines because of the blending with emission from star formation, which underscores the advantage of IFU data in enabling the removal of large-scale orbital motions of the host galaxy. IS absorption features integrate along the line of sight, are sensitive to outflowing material over a wider range of conditions and to lower gas densities, and probe material over physical scales up to tens of kpc, hence plausibly average over longer timescales.
In turn, the emission line technique is more sensitive to denser material closer to the launching sites (as evidenced by high-resolution IFU maps), making it a more instantaneous probe of outflows. Differences in the spatial scales probed, along with possibly less collimated winds in "puffier" higher z galaxies (Law et al. 2012c), may lead to different dependences on galaxy inclination. Given the trends with galaxy properties discussed below, results will also depend on the sample selection and parameter space coverage.

Figure 8: Distinction between star formation- and AGN-driven ionized gas winds at z ∼ 1−3 (top and bottom rows, respectively), in terms of spatial, spectral, and demographic properties (left to right). The maps show two galaxies observed with SINFONI+AO and HST at FWHM resolution of ∼ 1.8 kpc, with the stellar rest-optical light and narrow Hα emission from star-forming regions shown in red and green colors, and the broad Hα+[NII] outflow emission shown in white contours. The composite spectra are constructed from near-IR IFU observations with KMOS and SINFONI, where the continuum was subtracted and large-scale orbital motions were removed based on the narrow Hα velocity maps prior to stacking. The demographic trends are based on the fraction of individual objects exhibiting the spectral signatures of SF- and AGN-driven outflows.

For SF-driven winds, qualitatively similar trends in incidence, or in the strength and velocity width of the wind signature, with measures of star formation activity have been found in many other studies. Quantitatively, there are some notable differences, especially between studies using different techniques. For instance, among the full near-IR sample studied by Förster Schreiber et al., the global fraction of SF-driven outflows is ∼ 11%, and reaches ∼ 25%−30% at ∆MS ≳ 0.5 dex or ΣSFR ≳ 5−10 M⊙ yr−1 kpc−2; no strong trend with galaxy inclination is found (see also, e.g., Newman et al. 2012b). These fractions are lower than the ≳ 50% based on the occurrence of blueshifted IS absorption lines after accounting for the anisotropic geometry (trends with inclination in these studies are stronger) and the clumpiness of the outflowing gas (e.g., Weiner et al. 2009, Kornei et al. 2012). These differences in incidence are consistent with the different physical and time scales of outflows probed by each technique, and possibly reflect differences in sample selection (mass- vs. UV-selected). The interdependence between SFR, M⋆, and z, and the choice of criteria employed to identify/exclude AGN, may introduce residual trends with M⋆ (e.g., Weiner et al. 2009, Freeman et al. 2019). In general, SF-driven outflows appear to become most prominent above ΣSFR ≳ 0.5−1 M⊙ yr−1 kpc−2, suggesting a higher threshold for wind breakout that may be related to the geometrically thicker, denser, and more turbulent ISM in high z galaxies (e.g., Newman et al. 2012b).

Turning to AGN-driven outflows, near-IR studies considering the full galaxy population have highlighted a steep increase in incidence towards higher masses, most pronounced above log(M⋆/M⊙) ∼ 10.7 (e.g., Genzel et al. 2014b, Leung et al. 2019), qualitatively tracking the behavior of AGN fractions identified in flux-limited surveys (e.g., in X-rays; see Section 3.7).
The tight positive trends with measures of central stellar mass concentration (such as Σ⋆ and Σ1kpc; Förster Schreiber et al. 2019) may not be surprising in light of the observed correlations between these properties and M⋆, and the elevated fraction of AGN among compact SFGs in log(M⋆/M⊙) ≳ 10 samples (e.g., Barro et al. 2014a, Rangel et al. 2014, Kocevski et al. 2017). Among AGN, the frequency and/or velocities of outflows appear to increase with LAGN, consistent with simple expectations whereby more luminous AGN can drive more powerful winds (e.g., Harrison et al. 2012, Brusa et al. 2015a, Leung et al. 2019). In terms of absolute fractions, most studies imply fairly large outflow fractions among AGN, in the range ∼ 50%−75%, except for Leung et al. (2019), who report a lower 17% incidence (but similar trends with galaxy and AGN properties). Leung et al. (2019) noted that in their sample the outflow fraction among AGN is roughly constant with M⋆, and so is LAGN, concluding that AGN can drive an outflow with equal probability irrespective of the host galaxy mass, and that the observed trends among the full galaxy population reflect those in AGN luminosity coupled with a (mass-independent) Eddington ratio distribution. Detailed comparisons between all studies are still hampered by the heterogeneity in sample size, selection, AGN and outflow identification, data sets (IFU vs. slit spectra), and S/N, but they broadly support the picture that more powerful AGN-driven outflows become common in the most massive galaxies.

4.6.2. Properties of star formation-driven winds. Outflow velocity, mass, momentum, and energy properties across the galaxy population are essential to constrain the physical drivers of winds and the impact of stellar feedback on the evolution of galaxies (e.g., Dutton & van den Bosch 2009, Davé et al. 2017). By necessity, many simplifications are involved in interpreting the data of high z galaxies, usually in the context of idealized models consisting of a conical or spherical geometry, with the velocity distribution, extent, and gas mass being the main parameters. In the theoretical framework, the outflow velocity is generally assumed to be close to the escape velocity, such that vout ∝ vc. Energy and momentum conservation arguments lead to mass loading factors η ∝ vc^−2 for energy-driven winds and η ∝ vc^−1 for momentum-driven winds, where η = Ṁout/SFR and Ṁout is the mass outflow rate (e.g., Murray et al. 2005, Oppenheimer & Davé 2006). With vc ∝ Mbar^(1/3) or ∝ M⋆^(1/3) (e.g., Mo et al. 1998) and M⋆ ∝ SFR on the MS, η is expected to follow a power law in stellar mass and SFR with index −2/3 or −1/3 for energy- or momentum-driven winds, respectively. There is a strong predicted differentiation in vout ∝ ΣSFR^α, with α ∼ 0.1 for energy-driven winds and α ∼ 2 for momentum-driven winds (e.g., Strickland et al. 2004, Murray et al. 2005). These scalings are consistent with recent cosmological zoom-in simulations.
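The η scalings quoted above follow from one line of algebra each; a minimal derivation, using only the assumptions stated in the text (wind energy or momentum flux tied to the star formation rate, and vout ∝ vc), is:

```latex
% Energy-driven: \dot{E}_{\rm out} \propto {\rm SFR}
\tfrac{1}{2}\,\dot{M}_{\rm out}\,v_{\rm out}^2 \propto {\rm SFR},
\quad v_{\rm out} \propto v_c
\;\Rightarrow\;
\eta \equiv \frac{\dot{M}_{\rm out}}{\rm SFR} \propto v_c^{-2}

% Momentum-driven: \dot{p}_{\rm out} \propto {\rm SFR}
\dot{M}_{\rm out}\,v_{\rm out} \propto {\rm SFR}
\;\Rightarrow\;
\eta \propto v_c^{-1}

% With v_c \propto M_\star^{1/3}:
\eta \propto M_\star^{-2/3} \;\text{(energy-driven)}, \qquad
\eta \propto M_\star^{-1/3} \;\text{(momentum-driven)}
```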
PROPERTIES OF OUTFLOWS AND THEIR POWER SOURCES

vout: Outflow velocity, estimated from the profile of the emission or absorption line wind tracer. Methods based on the centroid or median velocity shift relative to the systemic value probe the bulk of the outflowing gas. Other methods, including the line width at a fraction of the peak amplitude or of the cumulative flux/absorption, probe the wind velocity distribution.

Rout: Outflow radial extent, most easily and directly obtained from maps of emission tracing the wind gas.

ne,out, NH: Local electron density and hydrogen column density of the outflowing gas.

Mout: Mass of outflowing material; it is ∝ Lbr ne,out^−1 for ionized gas emission tracers, where Lbr is the luminosity of the broad outflow-related line component, and ∝ NH Rout vout for IS absorption tracers.

Ṁout: Mass outflow rate, estimated as Mout × (vout/Rout).

η: Mass loading factor, the ratio Ṁout/SFR.

Ėout, ṗout: Outflow energy and momentum rates, ½ Ṁout vout² and Ṁout vout; their ratios to the stellar or AGN luminosity L and momentum rate L/c constrain the wind power source and driving mechanism.

LSFR, LAGN: Bolometric luminosity of the stellar population, dominated by young massive stars such that LSFR ∼ 10^10 SFR, and of the AGN, estimated from, e.g., X-rays, SED modeling, or nebular line emission.

Measurements of vout rely on parametrizations of the observed line profiles, and various approaches have been followed depending on the diagnostic and the data set (e.g., based on the FWHM or full width at zero power of emission tracers, or the centroid or fractional absolute or cumulative absorption for IS lines). Despite these differences, studies generally find results consistent with vout/vc ratios within a factor of a few around unity, with a linear or slightly sub-linear vout−vc trend (e.g., Weiner et al. 2009, Erb et al. 2012). Given that the escape velocity vesc ≈ 3 vc for realistic halo mass distributions, these results indicate that the higher velocity tail of the outflowing gas may escape from galaxies, and more easily so in lower-mass galaxies, but that recycling may not be negligible.

The broad Hα emission is well suited to estimate mass outflow rates and energetics. Assuming case B recombination and an electron temperature Te = 10^4 K, the mass of ionized gas in the outflow can be estimated via Mout ∝ Lbr,0(Hα) ne,out^−1, where Lbr,0(Hα) is the intrinsic luminosity in the broad emission component and ne,out is the local electron density, from which the mass outflow rate can be computed as Ṁout = Mout (vout/Rout), where vout and Rout are the outflow velocity and extent (e.g., Genzel et al. 2011, Newman et al. 2012a). Calculations typically assume that H dominates the mass and apply a 36% mass correction for He. Based on these relationships, η estimates in the range ∼ 0.1 up to above unity were derived on galactic and sub-galactic scales (e.g., Genzel et al. 2011, Newman et al. 2012a,b, Förster Schreiber et al. 2019, Freeman et al. 2019). While details in assumptions and samples vary among studies, a key difference lies in the adopted value for ne,out, which ranges between ∼ 50 and ∼ 400 cm−3. No significant or mildly positive trends of η with stellar mass were found in the larger samples spanning log(M⋆/M⊙) ≳ 9. Estimates based on IS absorption tracers rely instead on Mout ∝ CΩ Cf NH Rout vout (where CΩ and Cf are the angular and clumpiness covering fractions, and NH is the column density), as well as on ISM chemistry and radiative transfer effects on the line profiles (e.g., Veilleux et al. 2005). Under reasonable assumptions, η ∼ 1 was found in outflow studies of SFG samples employing this technique (e.g., Weiner et al. 2009, Kornei et al. 2012). Comparing the wind momentum and energy rates, ṗout = Ṁout vout and Ėout = 0.5 Ṁout vout², to the momentum and luminosity input from star formation, ṗrad = LSFR/c and LSFR, most results are in the ranges ṗout/ṗrad ∼ 0.1−1 and Ėout/LSFR ∼ 10^−4−10^−3, and thus consistent with momentum-driven winds powered by the star formation activity (e.g., Genzel et al. 2011, Newman et al. 2012a; but see Swinbank et al. 2019 for a contrasting result).
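As a worked example of the estimator chain just described, the sketch below propagates a hypothetical set of measurements through Ṁout, η, ṗout, and Ėout. The Hα-to-mass coefficient is written out as an explicit assumption (a case B value of ≈ 3.2×10^5 M⊙ per 10^40 erg s−1 at ne = 100 cm−3 and Te = 10^4 K, including the 36% He correction; the precise number varies with the adopted atomic data); all input values are invented for illustration.

```python
# Case-B coefficient (assumption; see lead-in): ionized gas mass traced by
# broad Halpha, with 36% He mass correction, at T_e = 1e4 K.
M_PER_LHA = 3.2e5  # Msun per 1e40 erg/s of intrinsic broad Halpha at n_e = 100 cm^-3

def outflow_budget(L_br_40, ne_out, v_out, R_out, sfr):
    """Ionized outflow mass rate, loading factor, momentum and energy rates.

    L_br_40 : intrinsic broad Halpha luminosity [1e40 erg/s]
    ne_out  : electron density of the outflowing gas [cm^-3]
    v_out   : outflow velocity [km/s];  R_out : outflow extent [kpc]
    sfr     : star formation rate [Msun/yr]
    """
    m_out = M_PER_LHA * L_br_40 * (100.0 / ne_out)        # Msun
    t_flow = (R_out * 3.086e16) / v_out / 3.156e7         # kpc/(km/s) -> yr
    mdot_out = m_out / t_flow                             # Msun/yr
    eta = mdot_out / sfr
    pdot = mdot_out * 1.989e33 / 3.156e7 * v_out * 1e5    # dyn (g cm s^-2)
    edot = 0.5 * pdot * v_out * 1e5                       # erg/s
    return mdot_out, eta, pdot, edot

# Hypothetical galaxy: L_br = 2e42 erg/s, ne = 380 cm^-3, vout = 450 km/s,
# Rout = 2 kpc, SFR = 50 Msun/yr
mdot, eta, pdot, edot = outflow_budget(200.0, 380.0, 450.0, 2.0, 50.0)
L_sfr = 1e10 * 50.0 * 3.828e33                            # erg/s, L_SFR ~ 1e10 SFR
print(f"Mdot_out ~ {mdot:.1f} Msun/yr, eta ~ {eta:.2f}")  # ~3.9 Msun/yr, ~0.08
print(f"pdot_out/(L_SFR/c) ~ {pdot / (L_sfr / 3.0e10):.2f}, "
      f"Edot_out/L_SFR ~ {edot / L_sfr:.1e}")             # ~0.2, ~1e-4
```

The resulting ratios fall inside the ṗout/ṗrad ∼ 0.1−1 and Ėout/LSFR ∼ 10^−4−10^−3 ranges quoted above.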
Trends of vout ∝ ΣSFR^(0.2−0.4) found in other studies from emission and IS absorption diagnostics suggest a possible mixture of momentum- and energy-driving (e.g., Weiner et al. 2009). Estimates of ne,out through the density-sensitive but weak [SII] λλ6716,6731 doublet have long been hampered by S/N limitations. A first reliable broad+narrow Gaussian decomposition in very high S/N stacked spectra (Figure 8; Förster Schreiber et al.) yielded ne,out ∼ 380 cm−3 for the outflowing gas (and ne,HII ∼ 75 cm−3 for the narrow star formation-dominated component). These results suggest the outflowing gas may experience compression, supported by enhanced broad component [NII]/Hα ratios in the same stacks, as well as by multiple diagnostic (total) line ratios for some bright individual star-forming clumps (Newman et al. 2012a) and for samples with multi-band near-IR spectra (Freeman et al. 2019). The different outflow gas densities adopted in the literature can account for much of the differences in η and other outflow properties, as the observables themselves (broad-to-narrow Hα flux ratio, vout, and Rout) are fairly comparable. With the new evidence suggesting higher ne,out, a lower range of η (< 1) in the warm ionized gas phase would seem favored.

Taken at face value, the low mass loading factors and the lack of evidence for an anticorrelation with galaxy stellar mass appear to be in tension with theoretical expectations and numerical simulations, for which η ≳ 0.3−1 at log(M⋆/M⊙) ∼ 10 and η ∝ M⋆^α with α in the range −0.35 to −0.8 (e.g., Lilly et al. 2013, Muratov et al. 2015). The tension is compounded by the vout results suggesting that some fraction of the outflowing gas may not be able to escape from the galaxy's potential (reducing the effective η). Notwithstanding all the simplifications made and the large uncertainties, the mass outflow, momentum, and energy rates discussed above almost certainly represent lower limits, as they miss potentially important wind phases, as seen in local starburst galaxies where the neutral and cold molecular phases dominate the mass and the hot phase dominates the energetics (e.g., Veilleux et al. 2005, Heckman & Thompson 2017).
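Because Mout ∝ ne,out^−1 while the other observables are comparable between studies, published η values can be placed on a common density scale with a one-line rescaling (a sketch; the numbers are illustrative):

```python
def rescale_eta(eta, ne_adopted, ne_new):
    """Mass loading factor rescaled to a different assumed electron density
    (eta ∝ M_out ∝ 1/n_e at fixed observables)."""
    return eta * ne_adopted / ne_new

# e.g., eta = 1.0 derived assuming ne = 50 cm^-3, rescaled to ne = 380 cm^-3:
print(rescale_eta(1.0, 50.0, 380.0))   # ~0.13: the "lower range of eta" case
```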
4.6.3. Properties of AGN-driven winds. The role of ejective AGN feedback through the "QSO mode" has been much debated in the recent observational literature. At high z, while individual luminous AGN may drive sufficiently massive and energetic outflows to suppress star formation in their hosts (e.g., Cano-Diáz et al. 2012, Cresci et al. 2015, Carniani et al. 2016, Kakkad et al. 2016), QSOs are rare, such that their impact on the massive galaxy population as a whole, and in the long run, has remained unclear. The more recent studies based on rest-optical emission lines of larger z ∼ 2 samples, encompassing unbiased (mass-selected) populations and/or AGN selected in deep X-ray surveys, both covering broader ranges in AGN luminosities (in some cases down to log(LAGN/erg s−1) ∼ 42.5−43), are shedding new light on this issue (e.g., Förster Schreiber et al., Talia et al. 2017, Leung et al. 2019). A first general conclusion is that with typical high velocities of ∼ 1000 km s−1, AGN-driven winds are in principle able to escape the galaxies and even the halo. The outflow velocity appears to depend on LAGN, but otherwise not on galaxy properties such as M⋆ or SFR, consistent with the AGN being the main power source. Double-Gaussian fits to high S/N stacked spectra suggest dense gas with ne,out ∼ 1000 cm−3 from the [SII] doublet (Figure 8; Förster Schreiber et al.), albeit with significant uncertainties because of the important blending for the broad emission of the fast AGN-driven winds and the doublet ratio reaching towards the high-density limit. Elevated [NII]/Hα ratios of ∼ 1−2, in broad and narrow emission alike, for a significant subset of this sample suggest an important contribution from shock excitation.

Keeping in mind all the uncertainties involved, the different assumptions adopted by different authors, and the large scatter among galaxies, there is overall agreement that on average the momentum and energy rates of AGN-driven outflows exceed those that could be produced by star formation alone, and are consistent with energy-driving contributing or even dominating (Leung et al. 2019), as also suggested by the vout dependence on LAGN (Talia et al. 2017, Leung et al. 2019). Mass outflow rates (compared to the SFRs) are found to be modest to low (η ≲ 1) on average among SFGs, and possibly higher towards the sub-MS regime. While AGN-driven winds may expel ionized gas at modest rates compared to the SFRs (similarly to the SF-driven outflows), they carry substantial amounts of momentum and energy (∼ 10 times or more than the SF-driven winds). If more mass, momentum, and energy are contained in other wind phases (or if the ne,out estimates are lower than adopted), all estimates would increase. Measurements in other phases are still scarce at z ∼ 2; CO observations suggest η ∼ 1 in two MS SFGs hosting AGN, one of which is a QSO (Herrera-Camus et al. 2019, Brusa et al. 2018).

Even if not substantially depleting the gas reservoirs of their hosts, the high-velocity and energetic AGN-driven winds escaping from the galaxies may interact with halo gas, reach high temperatures with long cooling times, and help prevent further gas infall together with virial shocks. The rapid increase in the incidence of AGN-driven winds among the galaxy population around the Schechter mass, echoing the decline in specific SFR and molecular gas mass fractions, is suggestive of a connection between AGN-driven winds and quenching, although it may not be sufficient alone to establish a causal link. Given the wide range in AGN luminosities and inferred Eddington ratios for the larger samples discussed above, the results appear to be qualitatively in line with suggestions based on recent cosmological simulations that kinetic feedback from SMBHs accreting at low Eddington ratios may be more efficient at quenching star formation through preventive feedback in the circumgalactic medium (Bower et al. 2017, Pillepich et al. 2018a).

OTHER z ∼ 2 STAR-FORMING POPULATIONS

We here briefly discuss specific subpopulations among SFGs that have been the focus of dedicated analyses, for reasons of their extreme starburst nature and/or their role as candidate immediate progenitors to the accumulating population of quiescent galaxies at cosmic noon. Salient physical features of the latter class of galaxies are summarized as well.

"MS outliers" and Submm Galaxies

Whereas normal MS galaxies are predominantly disks, at all epochs a population of starbursting outliers exists that may well result from merging activity. At z ∼ 2 such starburst galaxies, defined by their SFR being more than four times higher than on the MS, represent only 2% of the mass-selected SFGs, accounting for only 10% of the cosmic SFR density at this epoch (Rodighiero et al. 2011).
Modeling the SFR distribution of SFGs at fixed mass with a double Gaussian reveals a similar, constant or only weakly redshift-dependent, starburst contribution of 8%−14% to the overall SFR budget (Sargent et al. 2012). Structurally, there are indications that above-MS outliers exhibit smaller effective radii and cuspier light profiles than their exponential disk counterparts along the MS ridgeline. This is seen for nearby populations, but also in rest-UV/optical and radio observations at cosmic noon, albeit with significant scatter and only when collecting samples over wide areas to sample the poorly populated high-SFR tail of the galaxy population (Wuyts et al. 2011b, Elbaz et al. 2011). Splitting the SFG population into below-, on-, and above-MS subsets, Nelson et al. (2016b) find the above-MS SFGs to feature enhanced Hα ΣSFR at all radii. Only for log(M⋆/M⊙) > 10.5 is the enhancement seen particularly in the center. It should be noted though that extreme outliers (8× above the MS) have 90% of their star formation revealed only in the far-IR and often are optically thick even in Hα (Puglisi et al. 2017). Beyond structural properties, a systematic increase in gas fraction (e.g., Tacconi et al. 2020), dust temperature, and ratio of total IR to rest-8µm luminosities (Elbaz et al. 2011, Nordon et al. 2012) is seen as one moves across the MS towards higher SFRs. Not only does the amount of obscuration by dust increase (Wuyts et al. 2011b), the resulting effective attenuation law as imprinted in the IRX-β relation also varies systematically with position in SFR-mass space (Nordon et al. 2013). All of these trends between MS offset and physical diagnostics suggest that the observed scatter around the MS is real, and cannot be fully attributed to measurement uncertainties associated with the various SFR tracers employed. Confirming this point more directly, Fang et al. (2018) demonstrate that independent ∆MS measurements based on 24 µm and UV-to-optical diagnostics correlate significantly.
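A decomposition of the kind performed by Sargent et al. (2012) can be emulated with a toy two-component lognormal mixture; the sketch below (all distribution parameters are invented placeholders, not the published fit values) integrates the starburst component's share of the total SFR budget and the fraction of > 4× MS outliers.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Toy double-Gaussian decomposition of log(SFR/SFR_MS) at fixed mass.
# Placeholder parameters (NOT the Sargent et al. 2012 values):
f_sb = 0.03                            # number fraction of the starburst component
ms = rng.normal(0.0, 0.3, N)           # MS component, 0.3 dex scatter
sb = rng.normal(0.6, 0.25, N)          # starburst component, offset +0.6 dex
is_sb = rng.random(N) < f_sb
dlog_sfr = np.where(is_sb, sb, ms)

sfr = 10.0 ** dlog_sfr
sb_sfr_share = sfr[is_sb].sum() / sfr.sum()
outlier_frac = np.mean(dlog_sfr > np.log10(4.0))   # > 4x the MS
print(f"SB share of SFR budget: {sb_sfr_share:.1%}")    # ~10%
print(f"fraction of >4x MS outliers: {outlier_frac:.1%}")
```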
Their MS offset is further reduced when allowing for multi-component SFHs, which have a tendency to increase the inferred stellar mass. For this reason, Michałowski et al. (2014) argue that SMGs reside predominantly at the high-mass tip of the MS rather than being positioned above it, consequently also questioning their merger nature. The rarity of above-MS outliers and SMGs can be interpreted in terms of short duty cycles preceding a quenching event. For example, Wuyts et al. (2011b) contrast the number density of ∆MS > 0.5 outliers with the growing number density of quiescent galaxies at cosmic noon, inferring timescales of order ∼100 Myr for the starbursting phase. Toft et al. (2014) take a different approach, in which they contrast the inferred formation redshifts of compact quiescent galaxies at z ∼ 2 with the redshift distribution of the 3 < z < 6 SMG population, finding a good match that is further underlined by their similar positions in size-mass space and consistently high characteristic velocities. Assuming an evolutionary connection, they can reconcile the relative space densities by invoking an SMG lifetime of ∼42 Myr. The relatively short timescales found in the above studies are consistent with the duration of the final merger phase and peak starburst around coalescence of dissipative major mergers (e.g., Mihos & Hernquist 1994, Hopkins et al. 2006).

Compact Star-Forming Galaxies

In order to reveal evolutionary connections between galaxies before and after quenching, a selection on the basis of similar structural properties (i.e., identifying SFGs in the compact corner of size-mass space where high-z quiescent galaxies reside) has become a popular approach (e.g., Barro et al. 2013, 2014a). After z ∼ 1.8 the number density of these compact star-forming galaxies (cSFGs) drops precipitously, while the number density of compact quiescent galaxies is still rising. Duty cycle arguments akin to those described in the previous Section yield typical lifetimes for this cSFG phase of ∼500−800 Myr, dependent on the precise compactness and star formation selection criteria imposed (Barro et al. 2013). cSFGs thus represent a longer-lasting phase than the one discussed in Section 5.1, which is also reflected in their larger abundance and larger range in star formation activities, from above to on and below the MS. A salient feature of the cSFG population is that both X-ray and line ratio diagnostics reveal a very high AGN fraction (≳40% based on X-rays and up to ∼75% when folding in line ratio diagnostics). This enhancement in AGN activity is highly significant relative to quiescent galaxies, but also compared to similar-mass SFGs that are more extended (Barro et al. 2014a, Kocevski et al. 2017). They are further found to be highly obscured, with dust cores even smaller than their stellar extent (Barro et al. 2016) and galaxy-integrated ionized gas velocity dispersions (and in one case a measurement of a stellar velocity dispersion) of several 100 km s⁻¹, consistent with those measured using stellar tracers in compact quiescent galaxies. The implied dynamical masses of cSFGs are comparable with their stellar mass content (Nelson et al. 2014, Barro et al. 2014b). Resolved gas kinematics of cSFGs have revealed that the large galaxy-integrated linewidths can to a large degree be attributed to unresolved disk rotation (Barro et al. 2017b). While their stellar distributions are by definition compact, the ionized gas disks are often more extended.
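Galaxy-integrated dynamical masses of the kind quoted above typically come from a virial estimator of the form M_dyn = K σ² Re / G. The following is a minimal sketch; the coefficient K is an assumption (values of roughly 3−6 appear in the literature depending on the assumed structure), and the input numbers are hypothetical.

```python
# Minimal virial mass estimator sketch; K and the inputs are assumptions.
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def m_dyn_virial(sigma_kms, r_e_kpc, K=5.0):
    """Galaxy-integrated virial mass estimate M_dyn = K sigma^2 Re / G."""
    return K * sigma_kms**2 * r_e_kpc / G

# Hypothetical compact SFG: sigma ~ 250 km/s, Re ~ 1.5 kpc
m_dyn = m_dyn_virial(250.0, 1.5)
print(f"M_dyn ~ {m_dyn:.2e} Msun")  # ~1e11 Msun, comparable to M* of a cSFG
```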
Even when modeled with rotating disks and accounting for inclination and beam-smearing effects, the resulting stellar-to-dynamical mass ratios of the more compact SFGs are close to unity and larger than those of extended SFGs (Wuyts et al. 2016b). These dynamical measurements support a picture in which cSFGs are in their last stretch of star formation, with already dwindling gas fractions and short depletion times. Spilker et al. (2016) and Popping et al. (2017) have come to a similar conclusion on the basis of molecular line measurements for this subpopulation. Several lines of evidence highlight the resemblance in dynamical terms between cSFGs and the quiescent population to which they are candidate immediate progenitors. Compact quiescent galaxies at cosmic noon exhibit more flattened projected shapes than anticipated for a pressure-supported population, their M_dyn/M⋆ ratios calculated from galaxy-integrated stellar velocity dispersions using a virial mass estimator are higher for systems with flatter projected axis ratios (Belli et al. 2017a), and in four gravitationally lensed cases stellar velocity curves reveal unambiguously their rotationally supported nature (Newman et al. 2015, Toft et al. 2017), consistent with a highly dissipational formation process (Wellons et al. 2015).

THEORETICAL PICTURE AND ADVANCES IN NUMERICAL SIMULATIONS

Models of galaxy formation in a ΛCDM context have seen significant improvements over the past decade. In particular, great strides forward were made in resolving the so-called angular momentum catastrophe (the inability to reproduce the Tully-Fisher and rotation speed-angular momentum relations of observed disk galaxies; Navarro & Steinmetz 2000) and the overproduction of stars in both low- and high-mass galaxies. Cosmological galaxy formation models still feature variations at the factor of ∼2 level in, for example, the peak stellar-to-halo mass ratio reached around M_halo ∼ 10^12 M⊙ (and possibly more at lower/higher masses), but they now fall within the range of uncertainties of the abundance matching estimates that traditionally serve as a benchmark. Today, we face a landscape of theoretical models that can be differentiated by the physical scales they resolve, the numerical techniques they employ, and the (astro)physics they implement. The scales that are resolved dictate which physical properties can be considered "imposed" versus "emerging" from such models (see reviews by Somerville & Davé 2015, Naab & Ostriker 2017). On the largest scales, semi-analytic models can efficiently imprint the baryonic growth of galaxies on merger trees extracted from DM-only simulations with box sizes of 1−10 Gpc (Millennium, Millennium-XXL, Bolshoi, Las Damas). Effectively resolving individual galaxies at the halo scale, basic structural properties such as galaxy sizes are then evaluated through analytical recipes that either assume specific angular momentum conservation (Mo et al. 1998) or encode dependencies on both angular momentum and halo concentration (Somerville et al. 2018, Jiang et al. 2019), and are designed to capture processes such as disk instabilities and mergers. For the latter, simple energy conservation arguments are often augmented with calibrations based on idealized merger simulations to account for the impact of dissipative processes on the resulting bulges (Covington et al. 2011).
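As a concrete example of the first type of size recipe mentioned above (specific angular momentum conservation; Mo et al. 1998), here is a minimal sketch for a singular isothermal halo, R_d ≈ (λ/√2) R_200. The halo spin λ and the simplifying assumptions (disk specific angular momentum equal to that of the halo, no concentration or adiabatic-contraction corrections) are illustrative choices, not the full published recipe.

```python
# Sketch of the Mo, Mao & White (1998) disk size recipe under strong
# simplifying assumptions; lambda and H(z) values are illustrative.
import numpy as np

G = 4.301e-9  # Mpc (km/s)^2 / Msun

def r200_mpc(m_halo_msun, H_z_kms_mpc):
    """Virial radius where the mean enclosed density is 200 rho_crit(z)."""
    return (G * m_halo_msun / (100.0 * H_z_kms_mpc**2))**(1.0 / 3.0)

def disk_scale_length_kpc(m_halo_msun, H_z_kms_mpc, lam=0.035):
    """R_d ~ (lambda / sqrt(2)) * R_200 for a singular isothermal sphere."""
    return 1e3 * (lam / np.sqrt(2.0)) * r200_mpc(m_halo_msun, H_z_kms_mpc)

# A 10^12 Msun halo at z ~ 2, where H(z) ~ 200 km/s/Mpc for a Planck-like cosmology
print(f"R_d ~ {disk_scale_length_kpc(1e12, 200.0):.1f} kpc")  # ~2.5 kpc
```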
While intrinsically unable to track detailed structural evolution from first principles, such models have the merit of being computationally cheap (7 CPU hours to execute a single realization producing over 10^7 galaxies). They are therefore the only type of models for which a full exploration of parameter space, and a mapping of its degeneracies by means of Markov Chain Monte Carlo methods, is feasible (e.g., Henriques et al. 2015). Inclusion of hydrodynamics comes at a major computational expense but allows key processes for structural evolution to be resolved rather than prescribed. State-of-the-art full cosmological simulations (Illustris, EAGLE, Magneticum, Horizon-AGN, IllustrisTNG, SIMBA) are capable of evolving populations of 10^4−10^5 galaxies in 10−300 Mpc boxes with sufficient resolution (baryonic particle masses of ∼10^6−10^7 M⊙, sub-kpc gravitational softening lengths) to track their internal structural development and kinematics. With a temperature floor of 10^4 K and a reliance on subgrid recipes to infer cold gas fractions, they are complemented by zoom-in simulations of more than 100 times enhanced mass and spatial resolution, which are capable of resolving Jeans mass/length scales, giant molecular cloud formation, and a self-consistent modeling of the multi-phase ISM (e.g., FIRE, Auriga, VELA). Further down the series of Russian dolls come simulations of isolated galaxies or ISM slices in an external potential resolved down to parsec scales (e.g., SILCC). They are ideally suited to track the multi-phase breakdown of the ISM, including the chemistry of molecular gas formation and the local injection of energy and momentum by late stages of stellar evolution and its coupling to the surrounding medium (e.g., effects of peak driving vs. supernovae exploding after stars migrate away from their birth clouds, ISM porosity and the possibility of feedback energy escaping through the path of least resistance, the impact of the IR opacity on the effectiveness of radiation pressure, ...). The hydro-solvers employed in generating the above multi-scale simulations range from grid-based Adaptive Mesh Refinement (AMR) to Smoothed Particle Hydrodynamics (SPH; with refinements to better capture contact discontinuities and shock fronts, Hopkins 2015), and include hybrid moving mesh approaches (Springel 2010). In common between these models, the physics of gravity, hydrodynamics, cooling and heating, star formation and evolution (SNIa, SNII, AGB), chemical enrichment (tracking up to 11 individual elements), black hole growth, and stellar and AGN feedback are now routinely implemented. Increasingly, the impact of other processes, such as magnetic fields, radiation pressure, cosmic rays, and even the formation and destruction of dust, is also explored, albeit with some of these restricted to the highest resolution simulations only. Qualitatively, overcoming the hurdles posed by the angular momentum problem and the observed inefficiency of galaxy formation took the implementation of strong feedback processes. How exactly this goal is most realistically achieved through numerical and/or subgrid recipes remains a matter of intense debate, on which resolved observations of galaxy structure and kinematics aim to shed light. Implementations of stellar feedback differ in their injection velocities, mass loadings, and directionality, and in whether or not wind particles are temporarily decoupled from hydrodynamic interactions to prevent numerical overcooling.
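The trade-off between injection velocity and mass loading in such subgrid wind models can be illustrated with a toy energy budget: if a fixed fraction of the supernova energy per unit mass of stars formed goes into the wind, then the maximum loading scales as the inverse square of the wind speed. The numbers below (SN energy yield, coupling efficiency) are assumptions for illustration, not the prescription of any specific simulation.

```python
# Toy energy-budget estimate of the maximum SN-driven wind mass loading,
# 0.5 * eta * v_w^2 = coupling * e_SN; all normalizations are assumed.
E_SN_PER_MASS = 1e49   # erg per Msun of stars formed (~1e51 erg per ~100 Msun)
MSUN_G = 1.989e33      # g

def max_mass_loading(v_wind_kms, coupling=1.0):
    """Energy-driven limit on the mass loading factor eta."""
    v_cgs = v_wind_kms * 1e5                     # cm/s
    e_sn_per_g = E_SN_PER_MASS / MSUN_G          # erg per gram of stars formed
    return coupling * e_sn_per_g / (0.5 * v_cgs**2)

for v in (300.0, 600.0, 1200.0):
    print(f"v_w = {v:>6.0f} km/s -> eta_max ~ {max_mass_loading(v):.1f}")
# eta drops as v_w^-2: fast winds necessarily carry little mass per unit energy.
```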
Likewise, AGN feedback as a term covers a considerable range of implementations, starting from the choice of black hole seeding, whether or not boosting factors are applied to conventional Bondi accretion, and the directionality and continuity/stochasticity with which gas particles are heated or receive kinetic kicks. Different choices are made regarding which gas particles this thermal/kinetic energy is imparted on, and whether (e.g., IllustrisTNG) or not (e.g., EAGLE) different prescriptions are applied in the high vs. low accretion rate regimes. Consequently, predictions of key wind properties are still in flux, with for example TNG winds being faster but of lower mass loading than those in its Illustris precursor. This illustrates the continued need for empirical guidance. Last but not least, significant work on the interface between simulations and observations is enabling ever more consistent comparisons. This starts with the basic question of what it is that constitutes a galaxy's stellar mass and, relatedly, what it is that observers are measuring. Pillepich et al. (2018b) illustrate how aperture-based masses (as opposed to total stellar masses integrated out to the virial radius) can significantly alter our view of the stellar mass function and SMHM relation, particularly for the most massive galaxies featuring extended wings, but even at the knee of the SMHM relation. Bringing the models yet closer to the observational realm, post-processing with advanced radiative transfer techniques (SKIRT, Sunrise, Powderday) is enabling mock observations that account for the effects of light weighting, dust extinction, and reprocessing, including also ionized and molecular gas line emission. These can aid refined calibrations of observational diagnostics and SED modeling techniques, and are adopted in feasibility studies for upcoming observing facilities.

SUMMARY AND OUTLOOK

This article highlighted some of the key insights emerging from increasingly complete population censuses and increasingly detailed studies of individual galaxies back to the cosmic noon epoch. Many global and resolved properties tracing the stars, gas, and kinematics are well probed down to log(M⋆/M⊙) ∼ 10 (or below). Current results draw a consistent broad picture (see Summary Points) and raise the next questions for future work (see Future Issues for a selection). The knowledge gained from these observations has helped transform, in some aspects profoundly, our view of galaxy evolution. The emerging picture is encapsulated in the equilibrium growth model summarized in Section 1.1, and discussed by Tacconi et al. (2020) in relation to the evolution of the characteristic timescales of the processes controlling galaxy growth, including cosmic accretion, merging, galactic gas depletion and star formation, internal dynamics, and gas recycling. The state of the art in our knowledge of the properties of z ∼ 2 SFGs is illustrated in Figures 9 and 10. The censuses and scaling relations allow a depiction of the evolutionary and dynamical state of SFGs in relation to the MS. Coupled with the assumption that the mass-ranking of galaxies is conserved, this cross-sectional view of the galaxy population at different epochs can be translated into tracks representing the average evolution of individual galaxies. The outcome of such an approach is shown in Figure 9 for a galaxy reaching the stellar mass of the Milky Way by the present day.
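A minimal sketch of such a track integration is given below: the stellar mass is grown by integrating dM⋆/dt = (1 − R) SFR(M⋆, t) along a main-sequence parametrization. The MS form follows the Speagle et al. (2014) fit cited in this article; the return fraction, starting mass, and cosmology choices are illustrative assumptions, so treat the output as schematic rather than a reproduction of Figure 9.

```python
# Sketch: grow a stellar mass track along the star-forming main sequence.
# MS parametrization after Speagle et al. (2014); R_RETURN and the starting
# point are assumptions for illustration.
import numpy as np
from astropy.cosmology import Planck15

R_RETURN = 0.4  # assumed fraction of formed mass returned to the ISM

def log_sfr_ms(log_mstar, t_gyr):
    """log10 SFR [Msun/yr] on the MS at cosmic time t [Gyr]."""
    return (0.84 - 0.026 * t_gyr) * log_mstar - (6.51 - 0.11 * t_gyr)

def grow_track(log_m0=9.0, z_start=6.0, z_end=0.0, n_steps=2000):
    t0, t1 = Planck15.age(z_start).value, Planck15.age(z_end).value  # Gyr
    dt = (t1 - t0) / n_steps
    log_m = log_m0
    for i in range(n_steps):
        sfr = 10.0**log_sfr_ms(log_m, t0 + i * dt)       # Msun/yr
        m = 10.0**log_m + (1.0 - R_RETURN) * sfr * dt * 1e9
        log_m = np.log10(m)
    return log_m

print(f"log M*(z=0) ~ {grow_track():.2f}")  # endpoint of the toy track
```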
Cosmic noon is the main formation epoch of stars in z ∼ 0 galaxies of masses similar to and up to ∼2× higher than the Milky Way (log(M⋆/M⊙) ∼ 10.7−11), which account for as much as ∼25% of the local stellar mass budget. In turn, resolved mapping is now possible for various tracers of the baryon cycle from gas and star formation to metal enrichment and feedback, of the dynamical state, and of the processes leading to the build-up of galactic components and their imprint on the distribution of stars. Such comprehensive data sets at the currently best achievable ∼1 kpc resolution (unlensed) are still limited to small numbers of z ∼ 2 SFGs; Figure 10 shows one example.

Figure 9
Left: Evolutionary history of a Milky Way-mass progenitor galaxy. Tracks of different global properties are plotted as follows: gas mass fraction f_gas (magenta), SFR (blue), M⋆ (red), gas-phase metallicity (grey), rest-optical Re (black), and stellar angular momentum ∝ Re·M⋆·v_rot (purple). Each curve is normalized to a maximum of unity to highlight the relative rate of variations between the properties with lookback time. The stellar mass growth is derived from abundance matching following Hill et al. (2017), and the other curves are computed from evolving scaling relations at the corresponding M⋆(z) (Speagle et al. 2014, van der Wel et al. 2014a, Übler et al. 2017). Though simplistic (e.g., the progenitor is assumed to remain on the MS and the other relationships at all times), the plot illustrates how current empirical censuses and scaling relations allow us to investigate the average evolution of individual galaxies. Right: The same evolutionary track of a Milky Way-mass progenitor presented in the M⋆−SFR diagram, against the backdrop of the z ∼ 2 SFG population (blue shades marking logarithmic steps in number density; based on Speagle et al. 2014 and Tomczak et al. 2014). Markers indicate the structural/dynamical state, mode of star formation, and where feedback processes become increasingly apparent. The vertical red bar marks the characteristic (Schechter) mass, which has remained approximately constant since cosmic noon.

SUMMARY POINTS
1. Two key observational aspects have driven major advances in our understanding of how galaxies evolved since cosmic noon by providing unparalleled comprehensive views of distant galaxies: (i) the concentration in "legacy cosmological fields" of photometric and spectroscopic surveys across the electromagnetic spectrum, and (ii) the growing samples with stellar structure, star formation, and gas kinematics resolved on subgalactic scales. Mass selection is routinely used, allowing more complete population-wide censuses of the physical processes driving galaxy evolution.
2. Scaling relations between galaxy stellar mass, SFR, metallicity, gas content, size, structure, and kinematics are in place since at least z ∼ 2.5, indicating that regulatory mechanisms start to act on galaxy growth within 2−3 Gyr of the Big Bang. There is significant evolution in population properties: compared to z ∼ 0, typical SFGs at z ∼ 2 were forming stars and growing their central SMBH ∼10× faster from ∼10× larger cold molecular gas reservoirs. Disks are prevalent but smaller, more turbulent, and thicker than today's spirals. Quenching was underway at high masses, through mechanisms that appear to largely preserve disky structure.
3. Resolved stellar light, star formation, and kinematics on scales down to ∼1 kpc point to spatial patterns (more pronounced in higher mass SFGs) from dense and strongly baryon-dominated core regions with possibly suppressed star formation to more actively star-forming outskirts. Whether these patterns reflect inside-out growth/quenching scenarios, or carry the imprint of strong radial gradients in extinction and of efficient dissipative processes in gas-rich disks, remains open. The detection of large nuclear concentrations of cold gas and kinematic evidence of radial inflows in the most massive galaxies support the latter scenario, in which case massive but highly obscured stellar bulges may still be rapidly growing.

Figure 10
State-of-the-art observations detailing the evolutionary state and probing the baryon cycle of a z = 2.2 massive MS galaxy (M⋆ ∼ 10^11 M⊙). The maps show, clockwise from the top left: the rest-frame UV and U-band emission dominated by unobscured continuum light from young massive stars; Hα emission from moderately unobscured HII regions; CO(4−3) emission revealing the cold molecular gas fueling largely obscured star formation; rest-frame ∼5000 Å light tracing the bulk of the stars; the stellar mass distribution; the Hα velocity field and dispersion map tracing gravitational motions; broad low-amplitude emission in Hα+[NII] revealing high-velocity outflowing gas; and the [NII]/Hα ratio, sensitive to the excitation and physical conditions of the nebular gas. The FWHM resolution is shown by the white ellipse in each panel. Despite a clumpy appearance in UV/optical stellar light and Hα, the kinematics and stellar mass map reveal a massive rotating yet turbulent disk hosting a dense bulge-like component. The bulge may still be growing out of the massive central molecular gas reservoir, which may be replenished through inward gas streaming along a bar or spiral arms, as hinted at by the inner isovelocity twist, the double-peaked central dispersion, and the ∼5000 Å morphology. The weak [NII]/Hα radial gradient in the outer disk could indicate a shallow metallicity gradient, consistent with efficient metal mixing within the turbulent gas disk and/or through galactic outflows. The elevated [NII]/Hα ∼ 0.7 at the center signals the presence of a (low-luminosity) AGN. Ionized gas is being driven out of the galaxy through star formation feedback near the location of the brightest UV/optical/Hα clump, as well as through AGN-driven feedback near the nucleus. Based on data presented by Förster Schreiber et al. and Tacchella et al. (2018), and obtained from the ALMA archive (program 2013.1.00059.S, PI Aravena).

4. Outflows traced by warm ionized and neutral high-velocity gas act across a wide swath of the galaxy population. SF-driven winds dominate below the Schechter mass and are more ubiquitous and/or stronger at higher star formation levels, but may largely remain bound to the galaxy. AGN-driven winds dominate at higher masses, with rapidly rising incidence and/or strength with stellar mass and the central concentration thereof. Improved constraints suggest dense, possibly shock-compressed ionized material in both outflow types, leading to modest sub-unity mass loading factors in the warm ionized phase. The high duty-cycle AGN-driven winds are sufficiently fast to escape their massive hosts and heat halo gas, tantalizingly suggesting that a preventive form of AGN feedback contributes to quenching.

FUTURE ISSUES
1. What is the origin of scatter in galaxy scaling relations (between M⋆, SFR, size, gas content, metallicity, ...)?
Is the scatter around the observed relations attributable to short-term stochasticity (i.e., the equivalent of "weather") or an imprint of a long-term differentiation in growth histories among SFGs of the same mass at a given epoch? If the latter, what (halo) property other than mass is most appropriate to describe the SFG population as a two-parameter family?
2. What is the physics responsible for setting the gas turbulence? The redshift evolution of σ0 can be understood in the framework of marginally stable disks with gas fractions that are dwindling with cosmic time. Yet, at fixed redshift, no clear correlation with galaxy properties is emerging that would unambiguously identify the main driver of turbulence. Is this because of the limited dynamic range sampled, significant contributions from unresolved non-circular motions, or other observational factors? Results from strongly-lensed galaxies indicate elevated dispersions on scales down to a few 100 pc, but samples are still small and limited in galaxy mass coverage. Tighter constraints on spatial variations and anisotropy (as observed in nearby disks) will be helpful in addressing these questions.
3. What is the origin of the high baryon fractions and concentrations of SFGs? A robust trend of increasing baryon fractions with redshift up to z ∼ 2.5, and a correlation with increasing surface density, are emerging from disk modeling of IFU kinematics. Several lines of empirical evidence, supported by theoretical work, point to the important role of efficient transport of material from the halo to the disk scale, and further inwards to the bulge, in the gas-rich high-z disks. More direct constraints are needed on gas inflows onto and within galaxies, and on the relative importance of radial transport vs. inside-out growth in setting the structure of galactic components and, possibly, contributing to star formation quenching.
4. Where do massive z ∼ 2 SFGs form their last stars before they get quenched? Balmer decrement maps for individual galaxies and bolometric UV+IR SFR maps accounting for potential gradients in dust temperature will be required to address whether half-SFR sizes at the tip of the MS are smaller than, equal to, or larger than the half-stellar-mass sizes inferred from multi-wavelength HST imagery.
5. What are the total mass loading and energetics of galactic-scale winds, and their breakdown into multi-phase components? Much of our knowledge about wind properties and demographics is based on the warm ionized and neutral phases. A more holistic view of wind properties and their impact on galaxies will strongly benefit from the combination of multi-phase tracers, still limited to small numbers of more extreme objects and very few normal MS SFGs at high z. A few pilot programs suggest that, akin to what is seen in nearby starbursts, the bulk of the mass flow may be in the molecular phase, highlighting the importance of cold molecular gas kinematics to fully capture the role of winds in galaxy evolution and baryon cycling.
6. What are the exact mechanisms responsible for the shutdown of star formation in massive galaxies? The increase in the prevalence of massive bulges, dense cores, and powerful AGN and AGN-driven outflows at high galaxy masses, where the specific SFR and cold gas mass fractions drop, suggests they likely play a role in galaxy quenching.
The evidence of an association with quenching, however, remains to date largely circumstantial, and further observational constraints are needed to pin down the mechanism(s) at play and establish causality.
7. How do galaxies below ∼10^9−10^9.5 M⊙ fit into the emerging picture anchored in the properties of higher mass populations? Low-mass galaxies are still poorly explored because of current observational limitations. If an increasing proportion of the low-mass population has prolate/triaxial structure, how can we interpret their kinematics until we can fully resolve them? Do scaling relations break down at these masses?
The outlined questions, among others, frame the observational (and theoretical) landscape for the next decade, with exciting progress anticipated from developments on the instrumentation scene. NOEMA and ALMA are leveraging our knowledge about the stellar component and ionized gas with that of the cold molecular gas. The combination of the multi-IFU KMOS and the new sensitive AO-assisted ERIS single-IFU at the VLT will expand samples with kinematics, star formation, and ISM conditions from near-IR observations, and resolve them on sub-galactic scales down to ∼1 kpc. JWST at near- and mid-IR wavelengths will open up an unprecedented window on the earliest stages of galaxy evolution, charting the progenitor populations of cosmic noon galaxies. The giant leap in resolution afforded by diffraction-limited instruments on the next generation of 25−40 m-class telescopes, such as the first-light imager and spectrometer MICADO and the IFU HARMONI at the ELT, will be the next game-changers (Figure 11). With unparalleled sharp views of the galaxy population on the scales of individual giant molecular clouds and star-forming complexes, the era of extremely large telescopes will undoubtedly dramatically boost our knowledge and change our approach to studying galaxy evolution across all times.

Figure 11
Illustration of the gain in angular resolution from current to future facilities. For this simple illustration, optical imaging of the nearby spiral galaxy M83 (at a distance of 4.5 Mpc, based on data presented by Larsen & Richtler 1999) is redshifted to z = 2 and boosted in luminosity by a factor of ∼20 (following the MS evolution), but no other evolution is considered (e.g., in size or gas fraction). The left panel shows the original color-composite map at a resolution corresponding to 35 pc. Successive panels to the right are simulated color-composite images for observations with HST and AO-assisted instruments on 8 m-class telescopes at a resolution of ∼1.5 kpc, with the JWST/NIRCam imager at a resolution of ∼700 pc, and with the ELT/MICADO first-light instrument reaching a diffraction-limited resolution of ∼100 pc (pixel sampling is adjusted for each instrument). Simulations with the SimCADO software (Leschinski et al. 2016) indicate that compact cluster-like sources with luminosities comparable to those of bright super star clusters in nearby starburst galaxies can be detected and characterized with on-source integrations of a few hours. Such objects at z ∼ 2 might be progenitors of today's metal-rich globular cluster population (e.g., Shapiro et al. 2010).

DISCLOSURE STATEMENT
The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
ACKNOWLEDGMENTS
We are grateful to our many colleagues and friends for stimulating, critical, and inspiring discussions throughout the years, which have all contributed to shape the present work. We thank the members of the SINS/zC-SINF, KMOS3D, PHIBSS, and 3D-HST teams for their input and involvement in various aspects covered in this article. We give our special thanks to Harrison for sharing information in advance of publication. Star-forming galaxies at cosmic noon is a vast topic that rests on a much richer body of work than can be included in a single article within the space allocation; we have strived to provide useful references through which further work can be found.

SUPPLEMENTAL TABLES
The Tables below are associated with Figures 2 and 3 of the main article, which feature a selection of extragalactic surveys providing relevant samples at cosmic noon epochs, either specifically targeting objects in, or having a significant number of sources overlapping with, the 1 ≤ z ≤ 3 interval. Table 1 lists the photometric and spectroscopic surveys, their acronyms or brief descriptions, and the main reference for the source catalogs used in Figure 2. Table 2 focuses on the near-IR IFU surveys plotted in Figure 3, with their acronyms or brief descriptions, the main IFU instrument and observing mode used, and the reference for the published galaxy sample properties. The numbers of objects targeted, redshift ranges, and galaxy properties of detected subsets in Figure 3 and in the fourth column refer to the published samples in the references given in the last column. (c) KLASS also targeted z > 7 galaxies.

SUPPLEMENTAL TEXT: SPECTRAL AND KINEMATIC MODELING
The past decade has seen important developments in the modeling of the spectral energy distribution (SED) and kinematic data of distant galaxies, to derive their stellar population properties, such as stellar mass, age, and star formation rate and history, as well as their dynamical properties, such as circular velocity and dynamical mass. Deriving these fundamental properties is essential to place observed galaxies in the theoretical framework of galaxy evolution through comparisons with (semi-)analytical models and numerical cosmological simulations. As spectral and kinematic data sets are growing rapidly in both sample size and detail of information, increasingly sophisticated approaches are being developed to improve the efficiency of modeling codes and to treat adequately the various parameter correlations involved. Here we summarize basic ingredients and methods employed in state-of-the-art SED and kinematic modeling applied to data of high-redshift galaxies.

Spectral Modeling
The translation from SEDs to physical quantities describing a galaxy's stellar mass, star formation rate, or history requires the use of stellar population synthesis (SPS), dust, and ideally photoionization models. This is the case for SEDs sampled at any spectral resolution, and we therefore discuss these techniques irrespective of R. The ingredients of SPS models include a stellar spectral library, a set of isochrones, an IMF, and a star formation history (SFH); each of these components has been discussed in depth in dedicated reviews. Here we highlight a few succinct aspects of particular relevance to the study of distant galaxies. The stellar library must cover a range in stellar metallicities, effective temperatures, and surface gravities appropriate for the stellar population hosted by the galaxy under consideration.
Since empirical libraries are composed from spectral observations of stars in the Solar neighborhood, they may lack, or cover too sparsely, certain regions of parameter space that could potentially contribute significantly to the integrated emission of early galaxies. In order to include very sub- or super-Solar metallicities, or stars caught during short-lived evolutionary phases such as the Wolf-Rayet (WR) or thermally-pulsating asymptotic giant branch (TP-AGB) phases, theoretical libraries can be employed instead, even though these are not without flaws themselves, ranging from the treatment of convection to the quality and completeness of the atomic and molecular line lists underpinning them. A hybrid approach has been applied as well, in which theoretically motivated differential corrections are applied to empirical spectra to provide a denser and more complete sampling of parameter space, e.g., in metallicity and elemental abundance (Conroy & van Dokkum 2012). Short-lived evolutionary phases also pose a challenge when pairing stellar libraries with isochrones to construct so-called single (i.e., mono-age) stellar populations. Approaches alternative to the "isochrone synthesis" technique have been explored by, e.g., Maraston (2005), who adopted the fuel consumption theorem, in which the amount of hydrogen and/or helium consumed is taken as the integration variable, in principle allowing luminous, short-lived evolutionary stages such as TP-AGB stars to be captured more fully. With substantial contributions to the rest-frame near-IR, TP-AGB stars were argued to significantly impact inferred galaxy stellar ages and masses, particularly at cosmic noon, where characteristic stellar population ages match the phase where TP-AGB stars are most prominent (∼3×10^8−2×10^9 yr). That said, observational efforts at intermediate (Kriek et al. 2010) and higher spectral resolution (Zibetti et al. 2013) failed to find strong TP-AGB spectral signatures in those galaxies where they ought to be most prominent, potentially explained by (self-produced) dust attenuating the TP-AGB light. An extensive review of the IMF, and of the evidence for potential deviations from the standard IMF, is presented by Hopkins et al. (2018). Claims of non-universality of the IMF based on observations of distant galaxies themselves (e.g., a top-heavy IMF in order to reconcile number counts of submillimeter galaxies with models (Baugh et al. 2005), or to reconcile a census of the cosmic SFR density and stellar mass assembly history (Wilkins et al. 2008)) are not unambiguous in their interpretation (see, e.g., Safarzadeh et al. 2017). More convincing evidence, of a more bottom-heavy IMF in nearby ellipticals with high velocity dispersions, was provided based on three orthogonal lines of inquiry: IR spectroscopy (Conroy & van Dokkum 2012), dynamical modeling, and gravitational lensing (Treu et al. 2010). While the peak in the SFH of these galaxies can be traced back to the cosmic noon era, an application of such IMF variations has yet to find its way into direct look-back studies, with the additional complication that the respective IMF changes may be confined to the central regions of these galaxies. Finally, SFGs are not well represented by single stellar populations, and need to be modeled with extended SFHs. Here, a common approach has originally been to parametrize the SFH by an exponentially declining, so-called τ model, largely because of its historical roots in SPS modeling of nearby early-type galaxies, to which the technique was first applied.
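To make these parametrizations concrete, here is a minimal sketch of the τ model just mentioned, alongside two of the more flexible forms discussed next (delayed-τ and log-normal); the normalizations and timescales are arbitrary choices for illustration.

```python
# Common parametric SFH families (normalizations arbitrary).
import numpy as np

def sfh_tau(t_gyr, tau=1.0):
    """Exponentially declining 'tau model': SFR ∝ exp(-t/tau)."""
    return np.exp(-t_gyr / tau)

def sfh_delayed_tau(t_gyr, tau=1.0):
    """Delayed tau model: SFR ∝ t * exp(-t/tau), rising then falling."""
    return t_gyr * np.exp(-t_gyr / tau)

def sfh_lognormal(t_gyr, t0=1.0, sigma=0.5):
    """Log-normal SFH in cosmic time."""
    return np.exp(-0.5 * ((np.log(t_gyr) - t0) / sigma) ** 2) / t_gyr

t = np.linspace(0.05, 10.0, 200)  # Gyr since the onset of star formation
for name, sfh in [("tau", sfh_tau(t)), ("delayed", sfh_delayed_tau(t)),
                  ("lognormal", sfh_lognormal(t))]:
    # The plain tau model peaks at t=0 by construction; the others rise first.
    print(name, "peaks at t =", round(float(t[np.argmax(sfh)]), 2), "Gyr")
```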
Renzini (2009) made the case that rising SFHs may be more appropriate for SFGs at cosmic noon, and hence more flexible functional forms (delayed τ models, log-normal SFHs, or double power laws) are increasingly being adopted. Offering yet more freedom, Pacifici et al. (2015) adopt a more extensive and physically motivated library of SFHs drawn from a semi-analytical model of galaxy formation, and conclude that a quantification of the normalization, slope, and scatter of the stellar mass-SFR relation can be severely biased if both quantities are inferred from a common, oversimplified approach. In the same vein, others advocate the use of more flexible non-parametric (i.e., piecewise constant) star formation histories, and stress the importance of adopting appropriate priors. Attenuation by dust, present in copious amounts within massive SFGs at cosmic noon, has a dimming and reddening effect on the emerging SED. With the exception of the potential presence of a bump at 2175 Å, often attributed to PAH molecules, its wavelength dependence is smooth, but it nevertheless leaves a signature that is highly degenerate with variations in stellar age and/or metallicity. Whereas the most common approach is to adopt the Calzetti et al. (2000) reddening law, calibrated locally on a sample of starbursting galaxies, in recent years the first strides have been made to map the attenuation curves at high z, and their variation as a function of galaxy type, directly (e.g., Kriek & Conroy 2013). As an aid in breaking age-metallicity-dust degeneracies, SPS modeling codes are increasingly capable of accounting for far-IR constraints, where available. Any emission absorbed at short wavelengths should contribute to dust heating, with associated reprocessed emission at long wavelengths. Several state-of-the-art SPS modeling codes, such as MAGPHYS, now incorporate such energy balance arguments as well as Bayesian inference to explore parameter space. If not known spectroscopically, redshifts can be fitted simultaneously by these codes, enabling a self-consistent assessment of the error budget, including covariances. As a third component besides the SPS and dust models, photoionization codes such as CLOUDY (Ferland et al. 2017) or MAPPINGS (Sutherland & Dopita 2017) can be employed to superpose on the stellar emission the anticipated nebular lines. This is indispensable for full spectral fitting, but contributions from nebular line emission can also matter (and, provided proper modeling, even help) at lower spectral resolutions, especially when medium- or narrow-bands are included or for galaxies with high specific SFRs. Comprehensive overviews of the ingredients of photoionization models, and of recent advances in their application and calibration to galaxies across cosmic time, can be found in the literature. The need for redshift-appropriate calibrations was brought to light by the observation of systematic shifts in the characteristic strong rest-optical line ratios captured in excitation diagrams, revealing the evolving ISM conditions (see Section 3.6) as well as the changing shapes of the ionizing radiation field. Topics of current debate in this regard entail, from a modeling perspective, the role attributed to stellar rotation, binary evolution, and stellar mass loss in determining the amount of ionizing photons and their hardness (e.g., Eldridge & Stanway 2012).
SPS codes equipped with grids from photoionization models generally implement this in a self-consistent manner, such that line intensities are tied to the metallicity and star formation history of the stellar population; nevertheless, the dimensionality of the problem is typically increased by the introduction of additional free parameters, such as the extra attenuation towards HII regions. Overall, it is well established that stellar mass represents the quantity on which SPS techniques can place the tightest constraints, as its inference requires an assessment of the mass-to-light (M/L) ratio only, to zeroth order blind to the physical conditions responsible for setting this M/L (i.e., the balance of age, metallicity, and dust attenuation). Whereas systematic differences arise depending on the assumptions made, code-by-code comparisons at various levels of control suggest that, at least in terms of mass ranking, a high degree of consistency is reached (Mobasher et al. 2015). SFRs can be more challenging to infer in the presence of large columns of dust, in which case panchromatic information helps greatly. Star formation histories represent the most challenging inference in the case of SFGs. Looking ahead, a few avenues can be identified for future progress in this area. First, with the increasing availability (and, with the advent of JWST, also wavelength coverage) of spatially resolved information, SPS modeling can be applied to SEDs extracted on subgalactic scales. This has the merit of allowing one to trace the stellar build-up in situ, but in addition it can mitigate outshining effects. Whereas resolved SED modeling may come at the expense of wavelength coverage and sampling, galaxy-integrated constraints can be imposed. Second, in almost all applications to date a uniform metallicity is adopted for the entire stellar population. In future work, one could envision star formation and chemical enrichment histories being coupled self-consistently, an approach that several of the aforementioned codes already allow for in principle. Exactly what constitutes a self-consistent treatment is an issue that may not be addressed trivially, as the connection between the two histories is modulated by gaseous in- and outflows, both of which are ubiquitous around cosmic noon. Finally, a full interpretation of galaxy spectra and emission lines would ideally account not only for full SPS but also for radiative transfer. Such full-fledged 3D radiative transfer modeling is to date restricted to a handful of very nearby galaxies for which very high-resolution datasets are available (e.g., De Looze et al. 2014). Much simplified analytical descriptions of absorption and scattering under different geometries, such as homogeneous mixtures, (clumpy) foreground screens, and mixtures thereof, can be applied via analytical recipes to interpret the distribution of line strengths and ratios resolved on kiloparsec scales within 100s of nearby galaxies (e.g., Li et al. 2019).

Kinematic Measurements and Modeling
To date, kinematics of distant star-forming galaxies come exclusively from observations of emission lines, mostly Hα or other rest-optical nebular lines, or CO transitions in the submillimeter regime. The best constraints are obtained from integral field unit (IFU) spectroscopy or interferometry, providing simultaneously the full three-dimensional (3D) data, which is the focus in what follows.
Galaxy-integrated and slit spectra have also been used to derive kinematic properties, and slit spectra have been modeled following similar approaches to those outlined below, adapted for that type of data (e.g., Weiner et al. 2006, Price et al. 2016). The data are usually interpreted in the framework of axisymmetric rotating disks, motivated by the observations (Section 4 of the main article), where the physical quantities of interest include for instance the intrinsic peak rotation velocity v_rot and the local velocity dispersion σ0 (see the footnote below), and the total dynamical mass of the system M_dyn. Various approaches are followed, ranging from simple determinations based on the observed maximum velocity difference and line widths, measured directly from the data or estimated by adjusting a parametric representation of the velocity curve and dispersion profile (e.g., computed for an exponential distribution, or approximated by an arctan function), to full forward modeling of the data. The simpler methods use one-dimensional (1D) major-axis profiles or 2D maps extracted from the data cubes. The flux, velocity, and dispersion are usually obtained by fitting the observed line emission with a single Gaussian, which was shown to be adequate for the typical resolved scales and S/N levels of high-z galaxy data. Deviations from a single Gaussian may become appreciable in galaxy-integrated spectra depending, for instance, on the spatial distribution of the tracer emission line, the local intrinsic gas velocity dispersion, and the possible presence of strong galactic-scale winds; these effects should be taken into account in estimating rotation velocities from galaxy-integrated line widths (e.g., de Blok & Walter 2014). Spatial beam smearing, instrumental spectral resolution, and galaxy inclination i are treated explicitly by rescaling the observed maximum velocity and local dispersion through functions or lookup tables based on mock beam-smeared rotating disk models parametrized in terms of R_beam/Re and galaxy properties (such as mass and inclination for σ0), by subtracting in quadrature the instrumental broadening from the measured dispersion, and by dividing the projected velocity by sin(i) derived from the morphology (e.g., Johnson et al. 2018). Studies applying forward modeling perform fits of 1D profiles, 2D maps, or 3D cubes. The effects of resolution and inclination are treated implicitly by convolving the intrinsic, inclined 3D model with a kernel representing the point and line spread functions (PSF and LSF; e.g., di Teodoro & Fraternali 2015).

Footnote: For an exponential disk model, the maximum velocity is reached at a radius Rmax = 2.2 R_d = 1.3 Re, where R_d is the disk scale length and Re the effective radius enclosing half the light; in that case measuring v_2.2 at 2.2 R_d is equivalent to measuring v_rot. The choice between v_rot and v_2.2 depends on the goal of the analysis. Deviations from an exponential distribution change the ratio Rmax/Re, such that measuring the maximum v_rot is ideally done from the velocity curve rather than at a fixed radius. The quantity σ0 refers to the velocity dispersion across the galaxy as a measure of "turbulence," which, in the case of a disk and isotropic dispersion, is related to its geometrical thickness. It is to be distinguished from the total velocity dispersion σtot measured from the line width in source-integrated spectra (which includes line broadening from galaxy-wide velocity gradients) and from the central velocity dispersion commonly employed in the analysis of early-type systems (which would be strongly dominated by beam-smearing of the steep inner velocity gradient for a disk). To minimize line broadening caused by inner disk velocity gradients, σ0 is best measured away from the central regions. In data of high-z galaxies, σ0 may also include contributions from noncircular motions on scales below the resolution element (≲1−5 kpc depending on the data set).

Kinematic modeling codes developed specifically for application to observations at high redshift have become increasingly sophisticated in recent years, allowing more flexibility in model assumptions, more efficient parameter space exploration, and better quantification of the uncertainties of the best-fit values accounting for covariances. Examples include the DYSMAL code (e.g., Davies et al. 2011, with recent updates described by Wuyts et al. 2016 and Übler et al. 2018), GalPaK3D (Bouché et al. 2015), and 3D BAROLO (di Teodoro & Fraternali 2015), all based on axisymmetric models but differing in ingredients and in the dimensional space in which fits are performed. The most recent version of DYSMAL allows fitting of multiple mass components, such as a disk and a bulge, with relative mass ratios specified and parametrized as Sérsic profiles; the code self-consistently accounts for finite thickness (turbulent disk, flattened rotating bulge). The baryonic component(s) can be embedded in a dark matter halo with a choice of profile parametrizations (e.g., NFW, Navarro et al. 1996; double power-law; cored Burkert 1995 profile). The line-of-sight velocity distribution is computed from the total mass model, and relative weights can be applied to the light of the different components. DYSMAL is optimized to fit in 1D or 2D, although 3D fitting is also possible. GalPaK3D is designed to fit structural and kinematic parameters simultaneously and directly in 3D data cubes, assuming a light/mass component among several choices (e.g., exponential, Gaussian, and de Vaucouleurs profiles) and different parametrizations of the velocity profile (e.g., arctan, inverted exponential, hyperbolic, or computed from the 3D mass model). Both DYSMAL and GalPaK3D employ Markov chain Monte Carlo (MCMC) algorithms in a Bayesian framework to derive the best-fit parameters and their uncertainties. 3D BAROLO fits tilted ring models to 3D data, where each concentric ring is parametrized independently and is randomly populated with line-emitting clouds in six dimensions (three in each of spatial and velocity space), from which line profiles are built and projected into the model cube. This method can more naturally account for possible variations in orbits with radius, such as warps. The code uses a multidimensional downhill simplex solver for the minimization of non-analytic models, with uncertainties estimated via a Monte Carlo method. In principle, fitting in 3D space offers a number of advantages, as it avoids the necessary loss of information in extracting the projected 2D maps or 1D profiles from both data and model. In practice, the success of the fits can be hampered by low S/N and irregular or clumpy light distributions. For axisymmetric mass distributions, most of the information is encoded along the line of nodes, such that the parameters can be well determined from 1D fits; for sufficiently high S/N and well-resolved galaxies, 2D maps can constrain the inclination more accurately.
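A minimal 1D forward-modeling sketch, in the spirit of (but far simpler than) the codes described above: an arctan rotation curve is projected by inclination, convolved with a Gaussian beam, and fitted to a mock observed major-axis velocity profile. The beam size, noise level, and parameter values are all made up, and the flux weighting that real codes apply is omitted.

```python
# 1D forward modeling of a beam-smeared rotation curve (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

DX_KPC = 0.2            # sampling of the major-axis profile
BEAM_FWHM_KPC = 4.0     # assumed PSF FWHM (~0.5 arcsec at z~2)
SIGMA_PIX = BEAM_FWHM_KPC / 2.355 / DX_KPC

r = np.arange(-10.0, 10.0 + DX_KPC, DX_KPC)  # kpc along the major axis

def model_vobs(r_kpc, v_a, r_t, inc_deg):
    """Beam-smeared line-of-sight velocity for v(r) = (2/pi) v_a arctan(r/r_t)."""
    v_int = (2.0 / np.pi) * v_a * np.arctan(r_kpc / r_t) * np.sin(np.radians(inc_deg))
    return gaussian_filter1d(v_int, SIGMA_PIX)

# Mock data: "true" v_a = 250 km/s, r_t = 1.5 kpc, i = 60 deg, plus noise
rng = np.random.default_rng(1)
v_data = model_vobs(r, 250.0, 1.5, 60.0) + rng.normal(0.0, 10.0, r.size)

# Fit v_a and r_t with inclination fixed from imaging (it is degenerate with v_a)
popt, pcov = curve_fit(lambda rr, v_a, r_t: model_vobs(rr, v_a, r_t, 60.0),
                       r, v_data, p0=[150.0, 1.0])
print(f"v_a = {popt[0]:.0f} km/s, r_t = {popt[1]:.2f} kpc")
```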
Especially at high redshift, the morphology in line emission can be quite different from the underlying mass distribution and cannot be captured by simple representations, let alone in 3D (which ideally would best account for projection and light-weighting effects); in such cases, fits are best performed on the velocity and dispersion fields only. Despite the flexibility afforded by the above models, the observations may not allow all possible parameters to be well constrained, but the implementation of Bayesian analysis and MCMC algorithms has brought a major improvement over previous modeling by allowing for priors and for the propagation of uncertainties, including covariances, rather than simply fixing values. The residuals between the observed data and the best-fit kinematic model can be used in implementing the kinematic classification scheme discussed in Section 4.3. An alternative classification method relies on kinemetry, introduced by Krajnović et al. (2006) to analyze data of nearby early-type galaxies and adapted for applications to IFU studies of distant SFGs. Kinemetry is a generalization of surface photometry to the higher-order moments of the line-of-sight velocity distribution, where the degree of (a)symmetry in the velocity field and dispersion map along best-fit ellipses is quantified through harmonic expansions. The exact values of the parameters depend on the resolution and S/N regime of the data, such that the boundaries used to distinguish between disks and mergers need to be appropriately calibrated for the data sets under analysis.
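A schematic version of the harmonic decomposition underlying kinemetry is given below, applied to velocities sampled along a single ellipse. Real kinemetry also fits the ellipse geometry itself; this sketch assumes the geometry is known, and the velocity values are invented for illustration.

```python
# Schematic kinemetry-style harmonic expansion: a pure disk puts all its
# power into the n=1 cosine term, so a large k3/k1 signals non-circular motion.
import numpy as np

def harmonic_amplitudes(v_theta, n_max=5):
    """Fourier amplitudes k_n of velocities sampled evenly in azimuthal angle."""
    n_pts = v_theta.size
    theta = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    k = {}
    for n in range(1, n_max + 1):
        a_n = 2.0 / n_pts * np.sum(v_theta * np.cos(n * theta))
        b_n = 2.0 / n_pts * np.sum(v_theta * np.sin(n * theta))
        k[n] = np.hypot(a_n, b_n)
    return k

theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
v_disk = 180.0 * np.cos(theta)                      # pure circular rotation
v_pert = v_disk + 25.0 * np.cos(3.0 * theta + 0.4)  # added non-circular term

for label, v in [("disk", v_disk), ("perturbed", v_pert)]:
    k = harmonic_amplitudes(v)
    print(label, "k3/k1 =", round(k[3] / k[1], 3))
```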
Rheological Behavior of Tomato Fiber Suspensions Produced by High Shear and High Pressure Homogenization and Their Application in Tomato Products

This study investigated the effects of high shear and high pressure homogenization on the rheological properties (steady shear viscosity, storage and loss modulus, and deformation) and homogeneity of tomato fiber suspensions. Tomato fiber suspensions at different concentrations (0.1%−1%, w/w) were subjected to high shear and high pressure homogenization, and the morphology (distribution of fiber particles), rheological properties, and color parameters of the homogenized suspensions were measured. The homogenized suspensions were significantly more uniform than the unhomogenized suspension. The homogenized suspensions were also found to better resist the deformation caused by external stress (creep behavior). The apparent viscosity and storage and loss moduli of the homogenized tomato fiber suspensions are comparable with those of commercial tomato ketchup even at a fiber concentration as low as 0.5% (w/w), implying the possibility of using tomato fiber as a thickener. The model tomato sauce produced using tomato fiber showed desirable consistency and color. These results indicate that the application of tomato fiber in tomato-based food products would be desirable and beneficial.

1. Introduction

Tomato (Lycopersicon esculentum Mill.) is one of the most popular fruits in the world because of its unique visual appeal, taste, and nutritional value, as it contains ascorbic acid (vitamin C) and lycopene [1]. Processed tomato products such as purees and sauces are a primary source of tomatoes in the contemporary diet. Considerable research has been undertaken in the past to quantify and elucidate the natural consistency and structure of tomato products [2]. From a structural point of view, most tomato products are aqueous dispersions containing aggregated or disintegrated cells and cell wall material dispersed in water-soluble tomato components. The consistency of processed tomato products arises from the cell wall components, such as cellulose, hemicellulose, and pectin, and from the interactions among these components [2]. Cellulose is the major component of vegetable cell walls, and it is also the main component that affects the rheology of processed tomato products. Pectins are embedded naturally within the cellulose backbone and are also found in the serum phase. They are known to contribute to the structure of tomato products significantly, depending on the processing conditions [3][4][5][6]. Homogenization is a key processing step in the production of ketchup, sauces, and other tomato products. The homogenization process decreases the mean particle size of the tomato suspensions and imparts a smoother texture and higher viscosity. It also alters the nature of the suspension network and increases the viscosity of the suspensions [7,8]. During homogenization, tomato pulp is subjected to very high turbulence, shear, cavitation, and impact as it is forced through the homogenizer [9]. The homogenization process was found to alter the particle size distribution, pulp sedimentation behavior, serum cloudiness, color, and microstructure of tomato juice by disrupting the suspended pulp particles [10]. High pressure homogenization was reported to decrease the particle size, due to the disruption of the matrix, and to increase the tomato product's Bostwick consistency, probably due to the formation of a fiber network [11].
The large discrete cells and cell fragments of tomato suspensions were easily degraded by homogenization, which resulted in higher water-holding capacity [6,7,11]. High pressure homogenization reduced the mean particle size and narrowed the particle size distribution, thereby increasing the total surface area and the interactions among the particles [12]. Bengtsson et al. reported that nonhomogenized tomato suspensions had a swollen cell structure with relatively few cell aggregates, whereas homogenized suspensions contained a large number of degraded cell fragments [13]. Tomato peel is a by-product of the tomato industry, and fiber is extracted from tomato peel using a chemical method [14]. Tomato peel fiber contains about 80% total dietary fiber (mainly water-insoluble fiber), much higher than other vegetable by-products [15]. Due to its unique chemical composition and functional properties, tomato peel fiber can be used as a food supplement to improve the physical, chemical, and nutritional properties of food products. However, the color and flavor of tomato peel fiber must be considered carefully to avoid a negative impact on the sensorial characteristics of the final products [16]. To date, tomato fiber has received very little research attention despite its ability to contribute to desirable food texture and good mouthfeel. To the best of our knowledge, there is no study on the effect of high shear and high pressure homogenization on tomato fiber. Thus, this study aimed to investigate the effects of high shear and high pressure homogenization on the morphological and rheological properties of tomato fiber suspensions. We also compared the morphological, rheological, and color parameters of homogenized tomato fiber suspensions with those of commercial tomato ketchup and a model tomato sauce formulated for comparison. We believe that the findings presented in this paper will provide a better understanding of the functional properties of tomato fiber and help broaden its application as an important thickening ingredient in the food industry.

2. Materials and Methods

2.1. Materials. The tomato fiber sample was kindly provided by COFCO Tunhe Co. Ltd., Beijing, China. The solid content of this fiber sample was determined to be 4.80% (w/w). The fiber sample contained 2.11% (w/w) insoluble dietary fiber, as tested following the AOAC Official Method 991.43 [17], and 1.12% (w/w) protein, as tested using China's national food safety standards [18]. The tomato fiber was produced by concentrating and separating the solid part out of the tomato paste (without tomato peels or seeds) using a high-speed rotary mechanical instrument. The food grade tomato paste (29.0 °Brix, cold break), tomato ketchup, sugar, soybean fiber, and salt used in this study were provided by COFCO Tunhe Co. Ltd., Beijing, China. Deionized water was used to prepare the samples.

2.2. Mechanical Treatments. The tomato fiber suspensions were prepared at four concentrations (0.1%, 0.25%, 0.5%, and 1%, w/w) by mixing raw tomato fiber with an adequate amount of deionized water, calculated based on the moisture content of the tomato fiber. The shearing treatments were carried out using a laboratory disperser (IKA Ultra-Turrax T25, Germany). The tomato fiber suspensions were subjected to 3400 rpm, 5000 rpm, 8000 rpm, 10000 rpm, 12000 rpm, and 14000 rpm for 12 minutes each.
The above-mentioned sheared samples were homogenized using a high pressure homogenizer (ATS AH100D, Shanghai, China), which is a lab-scale homogenizer equipped with a valve. The maximum pressure of this homogenizer is 140 MPa. The homogenization was carried out for 2 passes at 0 MPa, 5 passes at 5 MPa, and then another 5 passes at 10 MPa. 2.3. Determination of Morphology. Twenty milliliters of untreated, sheared, and homogenized suspensions were separately placed in colorimetric tubes. Images were captured with a digital camera in order to compare the appearance of these suspensions. The microscope images of all the above-mentioned samples were also acquired. A very small drop of each sample was placed on a microscope slide and the pictures were taken using a microscope (Olympus CX31, Japan) at 100x and 400x magnification. 2.4. Rheological Measurements. Rheological measurements were performed using an AR2000ex rheometer (TA Instruments Ltd., Crawley, UK). This is a controlled stress, direct strain, and controlled rate rheometer with a torque range from 0.0001 to 200 mN⋅m and a high stability normal force range from 0.01 to 50 N. A parallel plate was used for all the tests. The temperature was controlled by a water bath connected to the Peltier system in the bottom plate. A thin layer of silicone oil was applied on the edges of samples in order to prevent evaporation. The linear viscoelastic region was determined for each sample through strain sweeps at 1 Hz (data not shown). Viscoelastic properties [storage modulus (G′), loss modulus (G″), and loss tangent (tan δ)] of samples were determined within the linear viscoelastic region. The samples were allowed to equilibrate for 2 min before each measurement. The steady shear tests were performed at 25 °C over the shear rate range of 0.01–100 s⁻¹ to measure the apparent viscosity. A steel cone geometry (60 mm diameter, 59 μm gap) was chosen for these measurements, since cone geometry is preferable for viscosity measurement. The frequency sweep tests were performed at 25 °C over the angular frequency range of 0.1–10 rad/s. The strain amplitude of these frequency sweep measurements was selected to be 1% according to the strain sweep results (data not shown) in order to confine these tests within the linear viscoelastic region. An aluminum parallel plate geometry (40 mm diameter, 1 mm gap) was chosen for these measurements. Creep experiments were carried out at a fixed shear stress of 7.958 mPa at 25 °C. The variation in shear strain in response to the applied stress was measured over a period of 2 min. An aluminum parallel plate geometry (40 mm diameter, 1 mm gap) was chosen for these creep measurements. 2.5. Preparation of Tomato Sauce. The formulation of the tomato sauce samples used in the first round of tests is provided in Table 1. The tomato paste and homogenized tomato fiber or soybean fiber were mixed according to this formulation. The required amount of water was added to bring the mass of the sample to 110 g. The homogenized tomato fiber at 2.5% concentration was prepared as described in Section 2.2. The formulation of tomato sauce for the second round of tests is shown in Table 2. Two hundred grams of sauce was prepared for each formulation by measuring and mixing the ingredients listed in Table 2. The mixture was then heated at 95 °C for 10 min in a water bath with continuous stirring. The sauce container was covered during heating to minimize the evaporation of water. The sauce was finally cooled down to ambient temperature. 2.6. Analysis of Physicochemical Properties.
Bostwick consistency was determined using a standard 24 cm Bostwick Consistometer with 48 × 0.5 cm graduations (Endecotts ZXCON-CON1, London, UK). Seventy-five mL of sample was used to perform these tests. The reading was taken 30 seconds after the fluid was released to flow down the instrument. Colorimetric tests were performed using a spectrophotometer (Hunter Lab UltraScan VIS, Reston, US) in transmission mode. The samples were filled into a 10 mL quartz transmission cell with a 10 mm path length. The L, a, and b values were calculated by averaging the data of triplicate runs. The suspensions were shaken to achieve uniformity in color immediately before measurement. The pH and total acidity of samples were measured using an automatic acid analyzer (Metrohm 877 Titrino plus, Switzerland). In order to measure the Bostwick consistency, color, pH, and total acidity of the tomato sauce samples, the total soluble solids content was adjusted to 12.5 °Brix to keep the test conditions identical. A refractometer (Atago RX-5000, Japan) was used for this purpose. 2.7. Statistical Analysis. All of the above-mentioned tests were carried out in triplicate. The rheological data were obtained directly from the AR2000ex rheometer software (TA Instruments Ltd., Crawley, UK). The averaged value of triplicate runs was reported as the measured value along with the standard deviation. Results and Discussion 3.1. Effect of Homogenization on Suspension Morphology. The effect of mechanical treatment on the appearance of tomato fiber suspensions at solid concentrations of 0.1–1.0% (w/w) is shown in Figure 1. The solid content precipitated readily towards the bottom of the tube in all of the untreated samples irrespective of fiber concentration, and the amount of sediment increased with increasing fiber concentration. The uniformity of the suspensions greatly increased after shear homogenization or high pressure homogenization. The uniformity was relatively poor in shear homogenized samples at 0.1% and 0.25% (w/w) concentration compared with that of high pressure homogenized samples. The uniformity of suspensions produced by shear homogenization and high pressure homogenization was similar at 0.5% and 1.0% (w/w). It has been previously reported that a more stable network structure can be formed in tomato fiber suspension when homogenized at 9 MPa [7]. It can be observed from the photographs presented in Figure 1 that the shear homogenization affects only a part of the tomato fiber, most likely that from tomato flesh. The fibers from tomato pericarp could only be fragmented under high pressure homogenization. The structural features of tomato fiber particles are drastically altered by the high pressure homogenization. It has been reported that the homogenized tomato fiber suspensions consisted of smashed cellular material which eventually formed a fibrous-like network, while the nonhomogenized suspensions consisted of a mixture of whole cells and dispersed cell wall materials [6]. The distribution of solids in tomato fiber suspensions is illustrated in Figure 2. Dark red discrete particles are observed in untreated and high shear homogenized samples at all concentrations, while the high pressure homogenized sample showed much better uniformity in solid distribution. The high pressure homogenized suspensions containing 0.5% or 1% (w/w) fiber began to exhibit water-holding properties, indicated by the increased height of the tomato fiber sample on the glass (picture not shown).
It was reported earlier that homogenized tomato fiber suspensions showed higher water-holding capacity, albeit at much higher solid concentrations (10% to 21.7%) [13]. This increased water-holding capacity would be beneficial whenever the tomato fiber is used as an ingredient to impart desired texture in food products. The information presented in Figures 1 and 2 agrees with the findings of an earlier study [19] that the unhomogenized tomato juice showed whole cells with intact membranes and characteristic lycopene crystals, while the homogenized samples showed a large number of small particles composed of cell walls and internal constituents suspended in the juice serum. The values of the colorimetric parameters (L, a, and b) of unhomogenized tomato fiber suspensions at different concentrations are presented in Table 3. The L and b values decreased with increasing fiber concentration while the a value showed a substantial increase. The a/b value, which is of vital importance in the tomato processing industry, significantly (p < 0.05) increased with the increase in concentration. The a/b value of the 2% (w/w) tomato fiber suspension suggested that this formulation has desirable color for potential application in tomato sauces. It has also been reported in an earlier study that the values of L*, a*, and b* increased with the increase in homogenization pressure, indicating that the tomato fiber suspensions became more saturated in red and yellow color [10]. The effects of high shear and high pressure homogenization on the 1% (w/w) tomato fiber suspension are shown in Figure 3. None of the L, a, or b parameters was significantly (p > 0.05) affected by the high shear homogenization or high pressure homogenization. In order to illustrate the morphological changes caused by homogenization, the microscopic photographs of the 1% (w/w) tomato fiber suspension after homogenization are shown in Figure 4. After high pressure homogenization, the solids tended to be evenly distributed at the microscopic level (Figure 4(a)). The tomato fiber suspension showed a fibrous morphology with a high degree of uniformity, resembling a solution with a negligibly small amount of suspended solids after high pressure homogenization (shown in Figure 4(b)). The control samples showed unperturbed cells with intact membranes and the characteristic lycopene crystals. The homogenized samples showed a large number of small cell wall particles and internal cell constituents suspended in the juice serum, which agrees with the observation of Kubo et al. [10]. It has been reported that no intact cells were observed in tomato pulp subjected to high pressure (479 bar) homogenization and that the internal cell constituents were uniformly distributed in the homogenized pulp [9]. 3.2. Effect of Homogenization on Rheological Properties. As shown in the preceding section, the texture of tomato fiber suspensions could be significantly modified by homogenization. The effect of high shear and high pressure homogenization on the apparent viscosity is shown in Figure 5. All the tomato fiber suspensions showed shear-thinning behavior regardless of the concentration before and after homogenization. The apparent viscosity of all the samples increased with the increase in fiber concentration. The high shear homogenization significantly (p < 0.05) increased the apparent viscosity compared to the untreated sample. The application of high pressure homogenization increased the apparent viscosity the most (Figures 5(a)–5(d)). Augusto et al.
reported that the viscosity of tomato juice (4.5 °Brix) increased when the homogenization pressure increased from 50 MPa to 150 MPa [12]. A similar effect of high pressure homogenization on tomato suspensions was reported in various studies [6,7,20]. The cell wall of tomato cells could be broken even at moderate shear, and this rupture is linked with the increase in viscosity. The power law model (see (1)) was used to predict the variation of apparent viscosity with shear rate of the tomato fiber suspensions: η = K·γ̇^(n−1), (1) where η is the apparent viscosity (Pa⋅s), γ̇ is the shear rate (s⁻¹), K is the consistency coefficient (Pa⋅s^n), and n is the flow behavior index (dimensionless). The values of K and n for all the test samples were determined by fitting (1) to the experimental apparent viscosity versus shear rate data presented in Figure 5 and are presented in Table 4 (a brief curve-fitting sketch is given below). The flow behavior index (n) depends on the distribution of small and large particles and the rheology of the suspending fluid, while the consistency coefficient (K) depends on the maximum packing fraction and the distribution of small and large particles [21]. The K value increased very strongly with the increase of fiber concentration in all samples. The n value, which is an indicator of shear-thinning behavior, was the lowest in the pressure homogenized samples, the highest in the untreated samples, and intermediate in the high shear homogenized samples at a given concentration. This means that the high pressure homogenized samples are the most susceptible to shear thinning. The values of the storage modulus (G′) of the homogenized and unhomogenized tomato fiber suspensions are shown in Figure 6. Both the homogenized and unhomogenized samples showed a slight increase of G′ with the increase in angular frequency. At lower fiber concentrations (0.1%–1%), the G′ value of the high shear homogenized suspension increased more strongly compared to the unhomogenized sample. The increase of the G′ value was the strongest in the high pressure homogenized suspension, which is similar to the variation of apparent viscosity with shear rate. This observation agrees with the earlier report that the homogenization process increases both the storage and loss modulus of tomato suspension [7,19]. The loss moduli (G″) of the tomato fiber suspensions are presented in Figure 7. The G″ values increased with the increase in tomato fiber concentration. Both the high shear and high pressure homogenization processes significantly (p < 0.05) increased the G″ values. The high pressure homogenization appears to be more effective in increasing G″ values as a function of angular frequency. All suspensions exhibited solid-like behavior with G′ being higher than G″. Augusto et al. studied the effect of high pressure homogenization (up to 150 MPa) on the viscoelastic properties of tomato juice and found that both G′ and G″ increased when the juice was homogenized [22]. The increase in homogenization pressure was also found to increase both G′ (from 75.4 Pa to 212.2 Pa) and G″ (from 49.8 Pa to 80.9 Pa) in tomato suspensions [13]. The effect of homogenization on the creep behavior of tomato fiber suspensions is presented in Figure 8. At 1% (w/w) concentration, homogenized suspensions deformed less than the control sample under the same applied stress. The high pressure homogenized sample had the largest resistance to the applied stress among all the samples. This further indicates that homogenization helps build a stronger texture in the tomato fiber suspension, which could be utilized to formulate food products with desirable texture.
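As a brief illustration of how the K and n values reported in Table 4 can be obtained from equation (1), the sketch below fits the power law η = K·γ̇^(n−1) to apparent viscosity data using SciPy. The numerical data here are placeholders chosen to mimic shear-thinning behavior, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(shear_rate, K, n):
    """Apparent viscosity (Pa*s): eta = K * shear_rate**(n - 1)."""
    return K * shear_rate ** (n - 1.0)

# Placeholder data: shear rate (1/s) vs. apparent viscosity (Pa*s)
gamma_dot = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
eta = np.array([95.0, 12.0, 1.5, 0.19, 0.024])

# Fit equation (1) to the data; p0 is an initial guess for (K, n)
(K, n), _ = curve_fit(power_law, gamma_dot, eta, p0=[1.0, 0.5])
print(f"K = {K:.2f} Pa*s^n, n = {n:.2f}")  # n < 1 indicates shear thinning
```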
Figure 8 also shows that the slope of the creep curves of the homogenized suspensions is much smaller compared to that of the control sample. This indicates that the high shear and high pressure homogenized suspensions achieve an equilibrium state and maintain their solid-like structure sooner compared to the unhomogenized suspension. At the same stress, the unhomogenized suspension would continue to deform. This observation is consistent with an earlier publication which reported that homogenization reduced the creep compliance of tomato juice due to a stronger internal structure [19]. Based on all the rheological data presented above, it could be concluded that the rheological properties of tomato fiber could be significantly altered by the application of high shear or high pressure homogenization. The homogenized suspensions had higher apparent viscosity and higher G′ and G″, and they could withstand larger external forces and maintain their solid-like structure better. 3.3. Comparison with Tomato Ketchup. Viscosity is a key indicator of the quality of tomato paste and ketchup, based on which consumers make their purchasing decisions [23]. The apparent viscosity of the high pressure homogenized tomato fiber suspension at 2.5% (w/w) fiber concentration was compared with that of tomato ketchup of 30 °Brix (Figure 9). Despite the large difference in solid concentration between the two samples, they show similar shear-thinning behavior and comparable apparent viscosity. Thus, tomato fiber can replace other thickeners which might otherwise be used in tomato ketchup, for example, pectin or xanthan gum. The G′ and G″ versus angular frequency curves of the high pressure homogenized tomato fiber suspension (2.5%, w/w) and tomato ketchup (30 °Brix) are presented in Figure 10. The G″ versus angular frequency curves of these two samples were almost identical. The G′ versus angular frequency curves of these samples showed a similar trend, but the storage modulus of the homogenized fiber suspension was higher than that of the tomato ketchup within the entire angular frequency range. This indicated that the fiber suspension had a stronger three-dimensional structure to resist external stress than the tomato ketchup. The viscoelastic characteristics of tomato sauce or ketchup are reported to depend on the diameter of the suspended particles and the water insoluble solids content [24]. The data presented in Figure 8 indicate that tomato fibers might be a better choice if a firmer or more solid-like texture is required. The creep diagrams of the high pressure homogenized tomato fiber suspension (2.5%, w/w) and tomato ketchup (30 °Brix) are shown in Figure 11. The tomato fiber suspension deformed to a lesser extent than the tomato ketchup, corroborating the fact that the tomato fibers provide firmer texture than the ketchup, although the texture is also affected by concentration. According to sensory evaluation data reported in an earlier study, the tomato suspension homogenized at 90 bar had a significantly thicker and smoother texture and significantly weaker graininess compared with the untreated sample [13]. 3.4. Application of Tomato Fiber in the Formulation of Tomato Sauce. Dietary fibers such as soybean fiber are frequently added in the production of tomato sauce. Thus, the effect of the addition of homogenized tomato fiber or soybean fiber was measured and is presented in Figure 12. Bostwick consistency is employed in this section since it is more often used in the tomato industry than rheological tests. A lower value of Bostwick consistency indicates a higher value of viscosity.
As can be seen from this figure, the addition of up to 0.5% (w/w) of tomato fiber could help the tomato sauce achieve a relatively high consistency. The amount of tomato fiber required would be one-third of that of soybean fiber to reach the same Bostwick consistency value. Typically, a tomato sauce with a Bostwick consistency value of about 6–8 provides desirable texture and mouth feel, for which 0.2–0.5% dry fiber is required. A comparison of the difference in color between the model tomato sauces prepared using tomato fiber and soybean fiber is presented in Table 5. The Hunter color parameters (L, a, and b) and the ratio a/b are compared for these two formulations. A high value of a/b is desired in most tomato products. The a/b ratio of the sauces containing tomato fiber is comparable to, but slightly higher than, that of the sauces containing soybean fiber. A slight decrease in total acidity was also observed in the sauce samples containing tomato fiber. Conclusions The effects of high shear and high pressure homogenization on the morphological and rheological properties of tomato fiber were investigated. Both the high shear and high pressure homogenization processes made these suspensions much more homogeneous, which enabled an even distribution of fiber particles. Both the high shear and high pressure homogenization significantly (p < 0.05) increased the apparent viscosity of the tomato fiber suspensions. The apparent viscosity of the high pressure homogenized suspension was 10 times higher than that of the unhomogenized one. The storage and loss moduli of the homogenized suspensions were higher than those of the unhomogenized one within the angular frequency range tested. The homogenized tomato fiber suspensions had a more rigid structure compared to that of the unhomogenized suspension and they resisted deformation better (creep curve). The color and total acidity of the model tomato sauce containing tomato fiber were more preferable than those of the one containing soybean fiber at the same fiber content. Figure 12: Bostwick consistency (cm/30 s) of tomato sauces prepared using homogenized tomato fiber (0.19%–0.73%) or soybean fiber (0.67%–2.7%) according to the formulation in Table 1; the x-axis gives the concentration of tomato fiber or soybean fiber (%). Red filled circles: tomato fiber; blue open circles: soybean fiber. The results presented in this paper indicate that tomato fiber can potentially be used as a food ingredient such as a thickener or stabilizer.
Assessment of the Environmental Impact of Road Infrastructure in Countries: A Study of the Namibia Scenario The assessment of the impact of road infrastructure in developing countries using the Namibian case scenario was carried out based on the contemporary challenges of road use. This study employed a qualitative technique using a sample size of thirty (30) selected by the simple random sampling technique. Charts, tables and frequencies were used to explain certain trends in the study. A well-structured, valid and reliable questionnaire instrument was designed for the study based on the following research questions: what are the impacts of road transport on the Namibian environment, and what are the possible measures that may be used to reduce the environmental impact of road transport on the Namibian environment? Twenty-five respondents (83%) agreed that road transport has drastically improved development, and 26 (87%) respondents also accepted that it can improve the Namibian economy. Similarly, it has also greatly improved communication and technology according to 27 (90%) respondents. To add more credence to the impact of road transport on the economy, all 30 (100%) of the respondents agreed that road transport aids mobility within Namibia, while 27 (90%) respondents supported the opinion that road transport aids in job search and 3 (10%) disagreed. The results show that car servicing habits are a potential threat which can affect pollution levels, with 97% of respondents agreeing that road transport causes environmental pollution and 100% when combined with energy consumption. Similarly, all the respondents indicated the potential for an increased accident rate arising from poor road safety. Felling of trees poses great danger (97% response) due to the degradation of the environment. Most respondents (77% and 83%) agreed that road transport can lead to land encroachment and loss of aesthetics and farming land. This research has shown that the death rate may eventually increase as a result of these impacts. Introduction Global technology and scientific innovation in the transportation system have resulted in an improvement of the global economy but have also caused environmental degradation. Transport plays an important part in economic growth and globalisation, but it most times causes air pollution and uses large amounts of land. While it is heavily subsidized by governments, good planning of transport is essential to make traffic flow and restrain urban sprawl. Road infrastructure, therefore, is a part of the structural, material or economic base of a society or an organization. It is a basic structure that fosters the good performance of cities', states' or countries' essential services. In this sense, for a country to have a good logistics infrastructural system in the different modes of transportation, constant investment from both public and private sectors is needed. Road transportation, therefore, is the act of moving passengers or goods from one location to another.
Modes of transport include air, rail, road, water, cable, pipeline and space. The field can be divided into infrastructure, vehicles, and operations. Road transport is important since it enables trade between peoples, which in turn establishes civilizations (Ovubude, 2005). Transport infrastructure consists of the fixed installations necessary for transport, and may be roads, railways, airways, waterways, canals and pipelines, and terminals such as airports, railway stations, bus stations, warehouses, trucking terminals, refuelling depots (including fuelling docks and fuel stations), and seaports. Terminals may be used both for the interchange of passengers and cargo and for maintenance (Meik et al., 2002). In the transport industry, operations and ownership of infrastructure can be either public or private, depending on the country and mode (Starkey et al., 2012). Passenger transport may be public, where operators provide scheduled services, or private. Freight transport has become focused on containerization, although bulk transport is used for large volumes of durable items. Transport plays an important part in economic growth and globalization, but most types cause air pollution and use large amounts of land. While it is heavily subsidized by governments, good planning of transport is essential to make traffic flow and restrain urban sprawl (Starkey et al., 2012). This study pays more attention to road transportation. Since independence in 1990, Namibia's road transportation infrastructure has enjoyed the largest outlay of central government investment compared with other modes and remains the preferred option for door-to-door linkage. Until recently, the policy initiative on road infrastructure development, funding, maintenance and even operations has been the sole responsibility of the various systems of government (Law Library of Congress, 2014). According to Namibia's Greenhouse Gas Inventory for the Year 2000, greenhouse gas levels within Namibia have rapidly increased in comparison with the greenhouse gas levels of the year 1994. The increase in the level of greenhouse gases is caused in part by the increase in road transportation within the country (Ministry of Environment & Tourism, 2008). This study joins the debate by investigating the impact of road transportation on the Namibian environment. The environmental impacts of road transportation have been amplified due to increasing transport volumes and the increasing use of road transportation. According to Robinson (2011) and Groth (2012), transport volumes as well as the share of the road sector are continuously rising, and this intensifies the environmental impacts. Road transportation is especially guilty of this as it contaminates the environment with the release of exhaust emissions. Within such assessments, factors such as the costs caused per transport operation are calculated. The analyses include aspects like capacity utilization, the use of environmentally friendly tires and eco-friendly driving styles. These factors influence the fuel consumption and thus also the final production of exhaust emissions (Meik, Jeo, Mendelsohn, & Jenks, 2009). The different modes of transport include human-powered, animal-powered and road transport (Meik et al., 2009; Sharpley, 2012; Robinson, 2011; Groth, 2012). Road transportation is a very popular mode of transport and yields many benefits for the economy as well as the citizens of Namibia.
Not only does it allow social integration, but it enhances economic efficiency, as it decreases costs and prices and enhances trade and employment activities. Road transport in Namibia has social, economic, political, environmental and energy roles. However, road transportation also poses a great threat to the environment of Namibia. Not only does the natural habitat of certain plants and animals get destroyed due to the building of road infrastructure, but road transportation also increases greenhouse gases and pollutes the air. Road transportation is especially guilty of contaminating the environment with the release of exhaust emissions, which have a negative environmental impact. Road transportation, however, consumes a lot of resources, such as the land on which roads are built, causes deforestation, which has a negative impact on the environment, and much time is also consumed in building, maintaining and operating the road transportation system. Impacts of transport on the ecosystem include direct, indirect and cumulative impacts (Petzmann, 2009). But Sharpley (2012) explained that the most important impacts of road transport on the environment relate to climate change, air quality, noise, water quality, soil quality, biodiversity and land take (invasion). This study therefore analysed the environmental and social impact of road transportation on the environment of Namibia, with the aim of finding possible solutions. 2.1: Research Design The qualitative method was adopted, which has been taken as a loosely defined category of research design that is field focused and deals with subjective data in descriptive form like notes, recordings or other descriptions (Bless and Higson-Smith, 2000). It is sometimes referred to as interpretative, naturalistic and descriptive research, involving small groups of data. Qualitative research according to Creswell (2014) is also hypothetical, particularistic, impersonal, experimental and stable. It is the outsider's perspective of the problem and it is unbiased. The researcher considered this method appropriate for collecting the best results, because the data could be handled efficiently and effectively. 2.2: Sampling and Population of the Study According to Kruger et al. (2001), the concept of sampling is one of the most important in the total research endeavour, and it is imperative that a researcher understands it clearly before selecting the sample, conducting the pilot study and carrying out the main research. "Sampling refers to elements of the population considered for actual inclusion in the study" (Creswell, 2008). The sample size was selected by a simple random sampling process proportional to population. Males and females were selected in alternation in successive sampling, so that in the end the sample consisted of 15 females and 15 males. The responses were analysed statistically using percentages. In the case of this study, the considered population comprised some employees of the Ministry of Environment and Tourism, some employees of road transport companies and road users in Windhoek. This was a single case study, which was considered the ideal way to investigate the impact of road transportation on the Namibian environment. Only a small portion of the population of road users, employees of the Ministry of Environment and employees of some selected road transport companies participated in the study, involving thirty (30) participants in total. 2.3: Data Collection Instrument A questionnaire was designed to capture data from the participants.
The structured questionnaires consisted of closed-ended questions in order to capture as much data as possible for analysis. Questions included in the questionnaire were designed in such a way that they were short, clear and precise. This was to ensure that the respondents had a common understanding of the questions asked, as well as to make the questions short and interesting. In addition to the formal questionnaires, interviews were also conducted. An interview is "a data-collection encounter in which one person (an interviewer) asks questions of another (a respondent)" (Kruger et al., 2001). It is "a mainstay of field research used both by participant observers and by researchers who make no pretense of being a part of what is being studied" (Kruger et al., 2001). Interviews can be divided into three main categories, namely: structured, semi-structured, and unstructured. This research study addressed the following objectives: analyse the impacts of road transport on the Namibian environment, evaluate the policies in place that protect the environment from road transport, and recommend possible measures that may be used to reduce the environmental impact of road transport on the Namibian environment. This study is driven by the following research questions: 1. What are the impacts of road transport on the Namibian environment? 2. What are the policies in place that protect the environment from road transport? 3. What are the possible measures that may be used to reduce the environmental impact of road transport on the Namibian environment? 2.4: Method of Data Analysis Interpretations were drawn from the results, inferences pertinent to the research were made, and conclusions were drawn. The broader meaning of the research data was sought. In this method the researcher compared the results and the inferences drawn from the analysis. Structured interviews conducted by the researcher helped in drawing conclusions about the results. Charts and simple tables were used to analyse and interpret the data. The well-structured questionnaire was given to some other experts to assess in order to confirm its coherence and validity. To test for its reliability, the test-retest method was adopted using Pearson's Product Moment Correlation Coefficient, which yielded a reliability coefficient of 0.78, indicating that the instrument is reliable (reliable if the coefficient of reliability is ≥ 0.5); a small computational illustration is given below. Reliability is the degree of consistency between two measures of the same thing (Mehrens and Lehman, 2007). Reliability also refers to the degree to which independent administrations of an instrument yield a similar or the same result under comparable conditions. Results The findings on the demography and questionnaire about the impact of road infrastructure on the Namibian environment are illustrated using graphical and descriptive statistics. The study was carried out among employees of the Ministry of Environment and Tourism, employees of some road transport companies and road users in Windhoek. Though most of the respondents had little or no knowledge about the environmental impact of road infrastructure, the researcher had to explain this to them while interviewing some of the respondents to obtain vivid information.
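As a quick computational illustration of the test-retest reliability check described above, the sketch below computes Pearson's Product Moment Correlation Coefficient between two administrations of a questionnaire. The scores are invented for illustration only and are not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson's product-moment correlation between two score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Invented test and retest questionnaire scores for ten respondents
test   = [12, 15, 11, 18, 14, 16, 13, 17, 10, 15]
retest = [13, 14, 12, 17, 15, 15, 12, 18, 11, 14]

r = pearson_r(test, retest)
verdict = "reliable" if r >= 0.5 else "not reliable"  # threshold from the text
print(f"reliability coefficient r = {r:.2f} -> {verdict}")
```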
Table 1 shows that 2 respondents indicated that they had a master's degree, 5 respondents indicated they had a bachelor's degree, 10 respondents indicated that they had a diploma, while the remaining 13 respondents were Grade 12 holders, representing the major educational levels. Similarly, Table 1 also shows that 13 respondents were between the ages of 21-29 years, while 10 respondents were between the ages of 30-39 years and the remaining 7 respondents were above the age of 40 years, representing all the major age groups. From Table 3 above, 5 (17%) respondents indicated that they service their cars once yearly, 10 (33%) respondents each indicated that they service their cars 2 to 3 times and 4 to 5 times yearly, while the remaining 5 (17%) respondents indicated that they service their cars 6 to 7 times yearly. Discussion of Findings From Table 3, the analysis shows that road transportation has really created an impact, especially in the area of development, thereby generating revenue from transportation for the government and individuals. In some respects, road transportation has also led to improved quality of life, because many under-developed communities now have road routes, thereby facilitating and easing the burden of movement from one place to another. The current development of roads promotes the growing concern for sustainable and eco-friendly transportation globally (Condurat, Nicuta & Andrei, 2017). Many rural areas are now fast becoming townships because of the road transportation that passes through those areas. From Figure 4, the analysis shows that road infrastructure generates revenue for the government and individuals. Transporters generate revenue as they put more automobiles on the road, and this also creates employment for the unemployed, who can act as drivers or co-drivers, which invariably creates employment and generates more revenue for transport owners. The importance of specific transport activities and infrastructure can thus be assessed for each sector of the economy (Arasan & Koshy, 2013). The Government of Namibia also has buses that ply different routes, especially in Windhoek, transporting people from one part of the city to another. This generates revenue for the government for maintaining roads. Figure 5 shows that road transportation has eased the communication process among people, whether via the sending of mail through courier services or the post office (Nampost). This has undoubtedly had a very considerable effect on the level of understanding of different groups and the mutual respect of one socioeconomic group for another (Ministry of Environment and Tourism, 2010). In developing countries, the lack of road transportation infrastructure and regulatory impediments are jointly impacting economic development by conferring higher transport costs, as well as delays rendering supply chain management unreliable (Ritz and Clarke, 2010). Technology has been improved through road transportation because places that were not accessible have been made accessible, as people can travel long distances without distress, communicate effectively and also transport communication gadgets from one place to another. The world has been reduced to a global village as a result of developments in transportation technology (Mendelsohn, Jarvis, Robert, & Robertson, 2009).
There is therefore cause for concern when considering the road transport infrastructure base in Namibia today, which compares unfavorably with those of several African nations both in terms of quality and service coverage. In particular, the rural areas, where the bulk of the population resides, are largely deprived of basic pieces of road transport infrastructure (Ritz and Clarke, 2010). Figure 6 specifically shows that movement from one place to another would be limited if there were limited road transport in the country. So road transportation has invariably eased movement and made life easier in moving from place to place. Similarly, Figure 7 shows that people can always take road transport to wherever they wish to search for employment because of its affordability. By providing this mobility, transportation supports an industry that offers services to its customers, employs people and pays wages, invests capital and generates income (FAO, 2010). Road transportation is the most affordable among the different modes of transportation. In addition to its affordability, it is also accessible and can reach almost any route, which is especially valuable for employment purposes. Table 3 also shows that most of the respondents service their cars on average twice yearly. The number of times that a car is serviced has an impact on the environment, especially in the area of pollution. If a car is serviced regularly, pollution will be minimized. Hazardous particles emitted from the exhaust pipes of cars may be minimized, and this will invariably reduce the harmful gases that are dangerous to human health. Gichaga (2017) recommended proper geometric design of roads, driver training and behaviour, vehicle maintenance and utilization of road safety parks. Figure 8 shows that road transportation may facilitate movement, but it also has a detrimental effect on the environment. In the area of grading, pollution is caused by the dust that the grader disperses. In the area of road construction, deforestation is caused, which leads to desertification and global warming, as was noted by Sharpley (2012) regarding the effect on biodiversity and extinction potential. So, invariably, road transportation technologies have an environmental effect. Research has shown that the transport sector alone contributes about 19.2% of carbon emissions and has the potential to reduce emissions by 4% (Abraham, Ganesh, Kumar & Ducqd, 2012). Similarly, anthropogenic activities such as deforestation and the burning of fossil fuels increase greenhouse gas emissions (Abraham et al., 2012). Research has also shown that there is an exponential increase in the emission of greenhouse gases and fuel consumption according to Condurat et al. (2017), which promotes pollution. Similarly, in Figure 9, the respondents agreed that road transportation consumes lots of energy, which indirectly causes pollution. Increasing noise levels have a negative impact on the urban environment, reflected in falling land values and the loss of productive land uses (Sharpley, 2012). From all indications, vehicles which ply the roads use energy derived from petrol and diesel, which contain hazardous substances that are dangerous to human health and the environment. Rodrigue (2012) mentions transportation as one of the behavioural factors that contribute to air pollution, notably in urban areas.
These hazardous substances include sulphur dioxide, nitrogen dioxide, carbon monoxide and arsenic, and all these substances can cause respiratory disorders or cancer, or they can suffocate someone to death. Some road users have turned the road into an express route to death, either because they are drunk, frustrated or even suicidal. The carnage caused by road transportation cannot be offset by the benefits it brings to society. People's lives are endangered, and many have been lost because of the failures of poor road safety. The probability of being killed in a road accident is a function of where the individual resides (Wegman, 2017). Non-human factors in traffic accidents involve mostly motorbikes in Indonesia, but the major remedial actions are the development of public transportation, improvement of the road ratio and traffic management measures (Soehodho, 2016). The felling of trees to construct roads leads to deforestation, which is the main cause of desertification and global warming. This means that the transport sector must be decarbonized to maintain the safety threshold of a 2 °C increase in average temperature (Santos, 2017). This is militated against by the lack of a globally legally binding agreement and the high relative cost of clean vehicle and energy technology (Santos, 2017). Some of these gases, particularly nitrous oxide, also participate in depleting the stratospheric ozone (O3) layer, which naturally screens the earth's surface from ultraviolet radiation (Sharpley, 2012). When these two conditions prevail, they destabilise the environment, thereby having a negative impact on it, as is also illustrated in Figure 11. Chemicals used for the preservation of railroad ties may enter the soil, and hazardous materials and heavy metals have been found in areas contiguous to railroads, ports and airports (Sharpley, 2012). Figure 12 indicates that many lands that could have been used for housing and farming are practically underused because they are diverted to the construction of roads. This has really affected housing, thereby increasing rents and the prices of houses. When this occurs, people tend to encroach on land to build informal settlements and squatter camps, which invariably makes the city unplanned. This agrees with the work of Sharpley (2012). From all indications, most land cannot be used for farming because of the loss of nutrients. Much of it has been affected by chemicals used in the construction of roads, such as bitumen, which erodes nitrogen from the ground soil when it comes in contact with it. So the loss of aesthetics and farming potential of land can lead to famine, and this invariably has a negative impact on the environment, as illustrated in Figure 13. However, roads and highways can produce complex negative impacts. The impacts of improvement, rehabilitation and maintenance projects, although usually more limited, can still be significant, not only on natural resources and systems but also on the social and cultural environment (Ashley, 2010). From all indications, as shown in Figure 14, dangerous gases emitted by vehicles plying the road can have negative health effects on people living in the surrounding environment, and these health issues might lead to health complications because of excess inhalation. This was not the case in developed countries such as the United States, where absolute levels of air pollutant emissions have considerably dropped (Rodrigue, 2012).
The issues of transportation and the environment go hand in hand, since transportation conveys substantial socioeconomic benefits but at the same time impacts environmental systems (Petzmann, 2009). Road transport also carries an important social and environmental load, which cannot be neglected (Minh, 2012). These impacts fall within three categories, as corroborated by Petzmann (2009): direct, indirect and cumulative. Some major negative impacts of transportation on the environment include degradation of air quality, greenhouse gas emissions, increased threat of global climate change, degradation of water resources, noise, and habitat loss and fragmentation (Demirel, Sertel, Kaya & Seker, 2008). Conclusion The different roles and impacts of transportation in our society have been highlighted. The negative impacts, such as pollution, noise, loss of aesthetics, consumption of energy, loss of land, and safety and accidents, resulting from the use of road transportation have been discussed in detail. Considering these negative impacts, one may be tempted to write off road transportation as highly detrimental to the development and progress of any nation. It is, however, evident from the positive roles played by road transportation as discussed in this paper that transportation is indispensable and relevant to the development of a society. This paper has revealed the impact of the expansion of roads in some major commercial areas of Namibia (especially the northern parts). The loss of customers due to the demolition of business premises, accompanied by inaccessible roads to new make-shift shops, has resulted in reduced profit and meagre income for some people residing where new road constructions are carried out. The study recommends the provision of accessible and affordable shopping complexes for the traders and speedy completion of road projects to reduce the hindrance to customers in reaching business areas, which affects the economy of informal workers. There is a need to take into cognisance the informal sector and its space requirements while planning the city. Findings by Ovubude (2005) have shown that the movement of passengers and freight in rural areas of Nigeria is comparatively smaller than that of intra-urban movement. However, more needs to be done to reduce the emission of particulates and oxides of nitrogen. As with other pollutants, the imposition of increasingly demanding targets is the single most effective way of stimulating improvements in the vehicle fleet. Governments therefore need to impose more demanding air quality standards, and to require action to achieve these standards first in those areas where larger numbers of people are exposed. Land consumption for transport infrastructure can adversely affect biodiversity and contribute to urban sprawl; hence there is a need for interaction between land use and transport planning, which will help to steer transport infrastructure away from protected areas.
Microbial Influences on Immune Checkpoint Inhibitor Response in Melanoma: The Interplay between Skin and Gut Microbiota Immunotherapy has revolutionized the treatment of melanoma, but its limitations due to resistance and variable patient responses have become apparent. The microbiota, which refers to the complex ecosystem of microorganisms that inhabit the human body, has emerged as a promising area of research for its potential role in melanoma development and treatment response. Recent studies have highlighted the role of the microbiota in influencing the immune system and its response to melanoma, as well as its influence on the development of immune-related adverse events associated with immunotherapy. In this article, we discuss the complex multifactorial mechanisms through which skin and gut microbiota can affect the development of melanoma, including microbial metabolites, intra-tumor microbes, UV light, and the immune system. In addition, we will discuss the pre-clinical and clinical studies that have demonstrated the influence of different microbial profiles on response to immunotherapy. Additionally, we will explore the role of the microbiota in the development of immune-mediated adverse events. Introduction Skin cancer is a major public health concern with increasing incidence rates worldwide. It has been estimated that melanoma alone will account for around 100,000 new cancer diagnoses in 2022 [1]. Over the past decade, immunotherapy has transformed the treatment landscape for advanced melanoma and other skin cancers [2]. Immune checkpoint inhibitors (ICIs) have demonstrated significant efficacy by unleashing the power of the immune system to recognize and attack cancer cells [2]. Despite the impressive clinical outcomes achieved with ICIs, a considerable proportion of patients fail to respond to therapy or develop resistance over time [2]. Several factors contribute to the variability in treatment response, including tumor heterogeneity, host factors, and environmental factors. Among these, the role of the gut microbiota in modulating the response to cancer immunotherapy has gained increasing attention in recent years [3]. The gut microbiota has been shown to shape the immune system and influence the efficacy of cancer immunotherapy in preclinical and clinical studies (as reviewed in [4][5][6]). However, the role of the skin microbiota in the pathogenesis of melanoma and its response to ICIs remains largely unexplored. The skin microbiota is a diverse community of microorganisms that inhabit the skin surface and play a critical role in maintaining skin homeostasis and host defense [7]. Understanding the contribution of the skin and gut microbiota to the response of skin cancers to ICIs is essential for improving treatment outcomes and developing personalized strategies for cancer immunotherapy. In our previous work published in 2021, we discussed the role that the gut microbiota plays in overcoming resistance to the ICIs used in the treatment of different cancer types [8]. As the field has evolved significantly in the past couple of years, the role of the gut microbiota has been extensively studied in many types of cancer, such that its role in each cancer type warrants a dedicated review. In this article, we aim to discuss the potential mechanisms underlying the interaction between the skin and gut microbiota and the immune system in melanoma, the current evidence supporting the role of the microbiota in ICI response, and the future implications for clinical practice.
As such, this article will focus on the role of the gut as well as the skin microbiota in the development and response of melanoma to ICIs, providing a focused update on this interaction in melanoma patients in particular [8]. Proposed Mechanisms of Skin Microbiota Influence on Melanoma Development Many kinds of skin commensal bacteria have been shown to promote skin immunity and consequently protect against skin infections, inflammatory disorders, and malignancies [9][10][11][12]. On the other hand, some chronic skin conditions may lead to alterations in the skin microbiome. Although no causative bacterial pathogen has been identified in melanomagenesis, alterations of the skin microbiome in chronic skin conditions may lead to colonization with pathogenic bacteria, which may in turn play a role in the development of non-melanoma skin cancers [13]. In addition, some small studies have shown differences in the skin microbiota in melanomas [14,15]. In a study by Mizuhashi et al., the presence of Corynebacterium was found to be much higher in patients with stage III/IV melanoma (76.9%) compared to those with stage I/II melanomas (28.6%) [14]. In addition, Salava et al. reported a numerically decreased skin microbial diversity in melanomas as compared to benign nevi; however, the results were not statistically significant [15]. The skin immune system is composed of the innate and adaptive immune systems. Keratinocytes constitute an essential component of the innate immune system and produce a variety of chemokines, cytokines, and antimicrobial peptides [16]. Antimicrobial peptide production is controlled and upregulated by microbial stimuli known as microorganism-associated molecular patterns (MAMPs) [17]. In this way, the skin microbial flora regulates the innate immune system. Antimicrobial peptides bind to pattern recognition receptors (PRRs) located on keratinocytes, antigen-presenting cells, and melanocytes, orchestrating the immune response [18]. PRRs include intracellular cytoplasmic receptors and Toll-like receptors (TLRs) [19]. Persistent activation of TLRs has been implicated in chronic inflammation and skin carcinogenesis [20]. Skin barrier disruption can alter microbial homeostasis and lead to microbial dysbiosis [21]. In addition, skin microbes can contribute to barrier dysfunction by releasing proteases that damage the epidermal lining [22]. While microbiota-specific barrier disruption has not yet been proven to promote skin cancers, skin barrier disruption in general and the consequent chronic inflammation have been related to non-melanoma skin cancers in multiple studies [22,23]. It also remains unclear whether the skin barrier disruption and consequent microbial dysbiosis or the microbial-induced skin barrier damage constitutes the inciting event of chronic inflammation and carcinogenesis. Moreover, it has been suggested that ultraviolet radiation (UV)-induced immunosuppression and the consequent skin carcinogenesis can be inhibited by the skin microbiota through alteration of cytokine gene expression and immune cell infiltration in the skin [24]. In a study by Patra et al. using germ-free mice, epidermal hyperplasia and neutrophil infiltration in UV-exposed skin were higher in the presence of skin microbiota, whereas mast cells, macrophages, and monocytes were more prominent in the absence of microbes [24]. In addition, genetic expression of proinflammatory cytokines was higher in colonized skin, whereas increased expression of immunosuppressive cytokines was observed in their germ-free counterparts [24].
Other studies have reported a protective effect of lactobacilli against UV-induced skin carcinogenesis [25,26]. Overall, intra-tumoral microbes can impact skin carcinogenesis by directly interacting with cancer cells or regulating other components of the tumor microenvironment (TME). Intra-tumoral bacterial composition in melanoma may influence the extent of immune cell infiltration, chemokine expression, and overall prognosis. For example, in a study using RNA sequencing data from The Cancer Genome Atlas in cutaneous melanoma, patients with low levels of intratumoral CD8+ T cells had significantly shorter survival compared to those with high levels. In this study, the intratumoral bacterial load of Lachnoclostridium was positively associated with infiltrating CD8+ T cells, suggesting that the intratumoral microbiota may affect intratumoral immune cell infiltration, thereby influencing survival [27]. To date, strategies to modulate the skin microbiome have not been studied as a therapeutic intervention for melanoma. There is a prospective study, SKINBIOTA (NCT 04734704), which will analyze the composition of the skin microbiota using skin swabs in patients treated with anti-PD-1 for metastatic melanoma. This study will contribute to the emerging field examining the interplay between the composition of the skin microbiome and immunotherapy response and resistance. Role of Gut Microbiota in the Development of Skin Cancers: Effects on the Immune System The gut microbiota has also been shown to play a role in the development of skin cancers [28]. In fact, the gut microbiota has been shown to have both oncogenic and tumor-suppressive properties that can exert specific effects on multiple types of cancers, including skin cancers [28]. In a study by Luo et al., Lactobacillus reuteri FLRE5K1 was shown to stimulate the production of anti-oncogenic cytokines in mice and prevent the migration of melanoma cells, thereby preventing the development of melanoma and prolonging survival [29]. In another pre-clinical study, supplementation with VSL#3 probiotics was found to trigger the production of butyrate and propionate by the gut microbiota [30]. This cascade led to the recruitment of Th17 cells, which in turn reduced lung metastases and decreased the number of tumor foci [30]. Li et al. demonstrated that transferring 11 bacterial strains, which were more abundant in mice lacking the ubiquitin ligase RNF5, resulted in the development of anti-tumor immunity and limited melanoma growth in germ-free mice [31]. On the contrary, other studies have shown that the gut microbiota can promote oncogenesis in skin cancers [32]. Gut bacterial profiles have been shown to be significantly different between melanoma and control patients, with changes in the bacterial composition with progression from in situ to invasive and, later, metastatic melanoma [32]. In particular, Saccharomycetales and Prevotella copri species were more abundant in advanced stages of melanoma [32]. Furthermore, Pereira et al. found that IL-6 and the microbiota of obese mice can promote the advancement of melanoma [33]. They conducted fecal transplant experiments using leptin-deficient mice and found that the transfer resulted in tumor development in lean mice [33]. In addition, microbial depletion using oral antibiotics leads to a reduced burden of subcutaneous and hepatic melanoma in mice, indicating a potential role of the gut microbiota in the progression of melanoma [34].
All these studies suggest that interventions targeting the gut microbiota constitute potential therapeutic modalities to target the development and progression of melanoma.

Pre-Clinical and Clinical Studies of the Microbial Profiles' Influence on the Response to ICIs

Immune checkpoint inhibitors (ICIs) represent a significant advance in the field of cancer immunotherapy and are widely used across multiple tumor types. These drugs specifically target immune checkpoints, including programmed cell death 1 (PD-1), PD ligand 1 (PD-L1), and cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) (Figure 1) [8]. Immune checkpoints are a complex set of stimulatory and inhibitory proteins that play a crucial role in regulating the T-cell immune response. They are responsible for controlling the activation of cytotoxic T-lymphocytes, maintaining self-tolerance, preventing autoimmunity, and adjusting the duration and strength of the immune response to minimize tissue damage during inflammation [35,36]. Several preclinical and clinical studies have shown that the responsiveness of multiple cancer types to ICIs relies on the microbiota present in the gut and the skin [8].

Immune checkpoint inhibitors work by targeting specific mechanisms that prevent T cells from attacking cancer cells in the body. One such mechanism involves the binding of B7-1/B7-2 to CTLA-4, which keeps T cells inactive and unable to kill cancer cells. Anti-CTLA-4 antibodies block this binding, enabling the T cells to become active and attack cancer cells. Another mechanism involves the binding of PD-L1 to PD-1, which also prevents T cells from attacking cancer cells. Anti-PD-1/PD-L1 antibodies interrupt this binding and enhance the ability of T cells to target and kill cancer cells.

In a study by Routy et al., the therapeutic efficacy of anti-PD-1 alone or in combination with anti-CTLA-4 was compared between antibiotic-treated or germ-free mice with melanoma and untreated controls [3]. The administration of antibiotics had a considerable negative impact on the efficacy of anti-PD-1 monoclonal antibody therapy, either alone or in combination with anti-CTLA-4 antibodies, resulting in increased tumor size, reduced antitumor effects and decreased survival [3]. In addition, colonizing the intestines of germ-free mice with fecal transplants rich in Akkermansia muciniphila restored the responsiveness of melanoma-bearing hosts to ICIs, a response that had previously been inhibited by the use of antibiotics [3]. Similar results were also reported in another study using mice with melanoma treated with anti-CTLA-4 antibodies [37]. Vetizou et al. showed that antibiotic-treated mice with melanoma did not respond to anti-CTLA-4 until colonized with Bacteroides fragilis [37]. Oral supplementation with B. fragilis in germ-free mice restored the therapeutic response to anti-CTLA-4 via the induction of T helper 1 (TH1) immune responses in tumor-draining lymph nodes (LN) and the promotion of the maturation of intra-tumoral dendritic cells (DC) [37]. In addition, germ-free mice with melanoma that received a fecal microbiota transplant (FMT) from melanoma patients with a strong response to anti-CTLA-4 had better outcomes after treatment with ICIs as compared to those with FMT from non-responder patients, with the former group favoring the growth of B. fragilis [37]. A study by Gopalakrishnan et al. examined the feces of 112 patients with melanoma treated with anti-PD-1 therapy.
The patients' gut microbiota was examined pre- and post-treatment via 16S sequencing and metagenomic whole-genome shotgun sequencing. Patients with a more diverse gut microbiome had a better response to anti-PD-1 therapy compared to patients with a less diverse gut microbiome. The microbiota of responding patients were enriched with the Clostridiales order, the Ruminococcaceae family and the Faecalibacterium genus, whereas those of non-responding patients were enriched with Bacteroidales [38]. Similar findings have been demonstrated in patients who received anti-CTLA-4. In fact, in a prospective study of patients who received ipilimumab for metastatic melanoma, patients whose baseline gut microbiota was enriched for Faecalibacterium had longer progression-free survival than patients whose gut microbiota was enriched for Bacteroidales [39]. However, this is opposed to the findings by Vetizou et al. summarized above, where Bacteroidales was associated with a better response to anti-CTLA-4 [37]. Gopalakrishnan et al. also performed FMT from the responding patients and the non-responding patients into mice. Mice transplanted with responder FMT had a better response to anti-PD-L1 therapy. These mice were found to have a higher abundance of Faecalibacterium in their gut microbiota [38]. Finally, by studying the response of 38 patients with metastatic melanoma to anti-PD-1 and anti-CTLA-4, Andrews et al. showed that responders had a different gut microbial composition compared to non-responders. Through 16S rRNA gene sequencing and shotgun metagenome sequencing of fecal samples, they showed that patients who were more likely to respond had a microbiome rich in Bacteroides stercoris, Parabacteroides distasonis, and Fournierella massiliensis. Non-responding patients, however, were more likely to have a microbial composition rich in Klebsiella aerogenes and Lactobacillus rogosae [6].

In order to address discrepancies between different studies, McCulloch et al. assessed the microbial composition of five different melanoma cohorts [40]. In their study, time-to-event analysis revealed that the baseline microbiota composition was optimally linked with clinical outcome about one year after treatment initiation [40]. When the combined data were analyzed through meta-analysis and other bioinformatic methods, it was found that the Actinobacteria phylum and the Lachnospiraceae/Ruminococcaceae families of Firmicutes were associated with a favorable response, whereas Gram-negative bacteria were associated with an inflammatory intestinal gene signature, an increased blood neutrophil-to-lymphocyte ratio, and unfavorable outcome [40]. Two microbial signatures, one enriched for Lachnospiraceae spp. and the other for Streptococcaceae spp., were linked with favorable and unfavorable clinical response, respectively, and with distinct immune-related adverse effects [40]. Despite variations between different cohorts, optimized learning algorithms trained on batch-corrected microbiome data consistently predicted outcomes of programmed cell death protein-1 therapy in all cohorts [40].

In summary, these studies examining the gut microbiome of immunotherapy responders and non-responders demonstrate an association between the gut microbiome and ICI response and resistance. Taken together, these findings suggest that the gut microbiome is an exciting therapeutic target to overcome ICI resistance.
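As an illustration of the kind of analysis these studies rely on, the sketch below computes per-patient Shannon diversity from a genus-level relative-abundance table and fits a simple cross-validated classifier to predict response, loosely in the spirit of the batch-corrected learning models mentioned above. The input files, column names, and model choice are assumptions for illustration; batch correction and the cited studies' actual pipelines are omitted.

```python
# Illustrative sketch (not the cited studies' pipeline): per-patient Shannon
# diversity from a genus-level abundance table, plus a cross-validated
# classifier predicting ICI response from abundances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

abund = pd.read_csv("abundance.csv", index_col=0)      # rows: patients, cols: genera
response = pd.read_csv("response.csv", index_col=0)["responder"]  # 0/1 labels

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Normalize each row to relative abundances, then compute diversity
diversity = abund.div(abund.sum(axis=1), axis=0).apply(shannon, axis=1)
print(diversity.groupby(response).mean())  # crude responder vs. non-responder check

# Cross-validated response prediction from abundances; batch correction omitted
clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, abund.values, response.values, cv=5, scoring="roc_auc")
print("AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```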
However, conflicting results have been published regarding the prognostic impact of specific microbial signatures, and uncertainty remains regarding the optimal evaluation and interpretation of the gut microbiome as a biomarker of immunotherapy response and toxicity. Additional large-scale studies are needed to determine whether specific microbial profiles can be linked to ICI response and resistance and used clinically as biomarkers.

Complete response to anti-PD-1 antibodies occurs in 10-20% of patients with metastatic melanoma, and the majority of patients who receive anti-PD-1 antibodies for metastatic melanoma will ultimately develop resistance [41]. There is an unmet need to identify biomarkers to predict ICI resistance and to develop novel treatment strategies to overcome it [42]. One proposed strategy to overcome ICI resistance relies on FMT. This technique involves transplantation of donor fecal matter into the recipient's intestinal tract, facilitating a transformation of the recipient's microbial composition [42,43]. Two phase 1 trials investigating FMT from immunotherapy responders combined with anti-PD-1 antibodies as a strategy to overcome anti-PD-1 resistance in patients with metastatic melanoma were recently published in Science [42,44].

Baruch et al. performed a phase I clinical trial to study the feasibility, safety, and immune cell impact of FMT combined with the reintroduction of anti-PD-1 for patients with anti-PD-1-refractory melanoma [42]. Two FMT donors were chosen based on a complete response to previous anti-PD-1 monotherapy for metastatic melanoma [42]. There were ten recipients in total, who received equal transplantation from each donor. Three recipients showed a response to anti-PD-1 treatment, all from the same donor (Donor #1), including one patient with a complete response (CR) and two patients with a partial response (PR). Stool 16S gene sequencing analysis showed a significant difference between the pre- and post-treatment microbiota of the recipients. The post-treatment microbiota also differed between the recipients of the different donors. Recipients from Donor #1 had a higher abundance of Bifidobacterium adolescentis, whereas recipients from Donor #2 had a higher abundance of taxa such as Ruminococcus bromii. Further analysis found that responders had a higher relative abundance of Enterococcaceae, Enterococcus, and Streptococcus australis and a lower relative abundance of Veillonella atypica. Interestingly, only recipients from Donor #1 upregulated additional gene sets related to APC activity, innate immunity, and interleukin-12 (IL-12), with responding patients showing increased CD8+ T cell infiltration into the tumors [42].

In addition, Davar et al. showed that in patients with advanced melanoma who were resistant to anti-PD-1 therapy, the combination of responder-derived FMT and anti-PD-1 was safe and effective [44]. Out of 15 patients, 6 experienced clinical benefit, including 1 patient with CR, 2 patients with PR, and 3 patients with stable disease (SD) lasting more than 12 months, and the microbiota was perturbed rapidly and durably [44]. The responders showed an increase in the abundance of Firmicutes (Lachnospiraceae and Ruminococcaceae families) and Actinobacteria (Bifidobacteriaceae and Coriobacteriaceae families), taxa that had previously been linked to the response to anti-PD-1, as well as increased activation of CD8+ T cells and a decrease in the frequency of myeloid cells expressing interleukin-8 [44].
Additionally, the responders had distinct proteomic and metabolomic signatures, and the gut microbiome was shown to regulate these changes through transkingdom network analyses [44].

While it can be difficult to draw definite conclusions regarding the effect of specific microbial species on the response to ICIs given the inconsistencies between different studies, it has been noted that Akkermansia muciniphila, Bacteroides fragilis, and Faecalibacterium tend to be associated with a positive outcome, unlike Bacteroidales, which usually negatively impacts the response of melanoma patients to ICIs. It is worth noting, however, that microbial diversity remains the only consistent finding associated with positive response outcomes across the different studies. Tables 1 and 2 summarize the main published preclinical and clinical studies assessing the gut microbiota and response to ICIs; their key findings include the following:

- B. fragilis (CTLA-4 mAb, melanoma): resistance to ICI therapy in antibiotic-treated mice; the anti-tumor response was restored by oral feeding with B. fragilis and enhanced by FMT from patients with increased Bacteroides spp. levels [37].
- Faecalibacterium (PD-1 mAb, melanoma): increased levels of Faecalibacterium led to reduced tumor size and an improved response to ICI therapy [38].
- FMT from previous anti-PD-1 responders (PD-1 mAb, melanoma): increased Firmicutes and Actinobacteria in previous non-responders, increased CD8+ T cell activation, and decreased IL-8-producing myeloid cells [44].
- Actinobacteria and the Lachnospiraceae/Ruminococcaceae families (PD-1 mAb, PD-1-treated melanoma): associated with decreased progression [40].
- Bacteroides genus and Proteobacteria (PD-1 mAb, PD-1-treated melanoma): associated with increased progression [40].

Despite all the above, many challenges still exist when it comes to drawing clinical conclusions from these data. In fact, most studies cited above had small sample sizes. The majority are also non-randomized, single-arm, early-phase trials. In addition, as noted above, many studies had discordant data, and no specific microbial species has been consistently associated with a positive or negative response to ICIs in melanoma patients. That being said, several clinical trials are currently running to broaden our clinical understanding of this complex interaction. Tables 3 and 4 summarize the ongoing clinical trials and observational studies currently assessing the interaction between the microbiota and the response to ICIs in melanoma patients; these trials evaluate anti-PD-1 alone or in combination with anti-CTLA-4, collect stool and saliva samples, and assess the gut microbiome's effect on 1-year PFS (some also include non-small cell lung cancer, renal cell carcinoma, triple-negative breast cancer, or patients with vitiligo or who developed vitiligo after ICI treatment).

Proposed Mechanisms through Which Microbiota Influences the Response to ICIs

The mechanisms through which the microbiota influences the response to ICIs have been extensively studied. Some of those potential mechanisms are summarized in Figure 2: production of inosine, anabolic amino acids, and short-chain fatty acids, as well as molecular mimicry between microbial and self-antigens. Through these mechanisms, the microbiota can affect immune cell infiltration into tumors and the consequent responses to immune checkpoint inhibitors.
The impact of the microbiota on anti-tumor immune cell infiltration is the most studied mechanism of interaction between the microbiota and the response to ICIs. At the level of the tumor microenvironment, a higher density of CD8+ T cells was observed in responding patients as compared to non-responding patients. This CD8+ T cell infiltration was positively correlated with the Clostridiales order, the Ruminococcaceae family, and the Faecalibacterium genus, and it was non-significantly but negatively correlated with Bacteroidales. In addition, a higher level of systemically circulating effector CD4+ and CD8+ T cells with a preserved cytokine response to anti-PD-1 therapy was associated with the Clostridiales order, the Ruminococcaceae family, and Faecalibacterium. On the other hand, gut microbiota enriched with Bacteroidales were associated with higher levels of Treg cells and myeloid-derived suppressor cells in the systemic circulation, with a blunted cytokine response to anti-PD-1 therapy [38]. Moreover, similar to what was found in human patients, mice transplanted with responder FMT were also found to have a higher density of CD8+ T cells. Furthermore, an upregulation of PD-L1 was also established in the mouse models. In fact, mice receiving FMT from responders were also found to have a higher frequency of innate effector cells and a lower frequency of suppressive myeloid cells [38]. Additionally, studies in mouse models have found that colonizing germ-free mice with specific gut microbiota leads to an increase in CD8+ T cell infiltration into the tumors and an increase in CD8+ and CXCR3+ CD4+ T cells in the circulation [45]. This, in turn, results in an increase in the type 1 immune response [45]. Similarly, Baruch et al. demonstrated an increase in intra-tumoral CD8+ T cell infiltration in FMT recipients responding to anti-PD-1. The FMT caused an increase in CD68+ APC infiltration into the gut lamina propria [42]. Another similar study showed that responding recipients shifted closer to the donors' gut microbiota as compared to the non-responders [44]. Increases in cytolytic CD56+ CD8+ T cells and terminally differentiated effector memory CD8+ T cells (CCR7− CD45RA+) were also noted through longitudinal single-cell analyses of peripheral blood mononuclear cells and tumor-infiltrating immune cells. Regulatory T cells (Tregs) were also found to be decreased in the responding recipients [44]. Therefore, through the use of FMT, some recipients are able to respond to immunotherapy via mechanisms similar to those of the responding donor [45]. It is important to note, however, that further research is needed to fully understand the mechanisms by which these microorganisms influence the response to ICIs.

Another proposed mechanism involves immune modulation by bacterial metabolites. Bacterial fermentation of carbohydrates into short-chain fatty acids (SCFAs) has been correlated with host immunity to ICIs [46]. The effects of butyrate have been widely studied and have shown differential influences on the response to anti-PD-1 and anti-CTLA-4 therapy. Faecalibacterium prausnitzii is considered a major contributor to butyrate formation. SCFA formation has been correlated with an improved response to anti-PD-1 therapy [39,47,48]. However, it has been shown to blunt the response to anti-CTLA-4 therapy in melanoma patients [49]. In addition to the production of SCFAs, the gut microbiota may also induce molecular mimicry of host cell antigens.
Self-reactive T cells are mostly eliminated during development; however, some are able to escape [50]. These may be activated by microbial antigens that have immunogenic properties similar to host cell antigens [51]. Some tumor cells express self- or neo-antigens that can be recognized by the self-reactive T cells [52]. As such, this cross-reactivity has been shown to enhance the response to ICIs, mostly by T cell-mediated killing [3,45,51]. One suggested mechanism also involves the formation of anabolic amino acids by the gut microbiota. These were found to be predominant in responding patients, while catabolic amino acids were mostly predominant in non-responding patients [38]. In fact, the biosynthesis of amino acids is proposed to stimulate host immunity [38]. Furthermore, inosine, a purine riboside, has been shown to correlate with the response to ICIs in mice. Inosine is an intestinal metabolite produced by Bifidobacterium and Akkermansia muciniphila. It has been shown to enhance TH1 differentiation and the function of naïve T cells expressing the adenosine A2A receptor. Inosine modulates the response to ICIs by inhibiting the immunosuppressive activity of adenosine, a naturally occurring molecule that suppresses the immune response. In contrast to adenosine, inosine has proinflammatory effects at the adenosine A2A receptor, supporting TH1 cells and their anti-tumor effects in mice [53]. Inosine mainly acts as a competitive inhibitor of adenosine by blocking its binding to its receptors, thereby facilitating an increased immune response against cancer cells [54].

Definition of Immune-Related Adverse Events (irAEs): Limitations to Using ICIs

Although the use of ICIs has transformed the landscape of cancer treatment by harnessing the immune system to generate an anti-tumor immune response, there are several limitations to using these drugs. First, not all patients respond to ICIs, and response rates vary by tumor type [42]. In addition, ICIs are expensive and may not be accessible to all patients [55]. Most importantly, ICIs can cause severe immune-related adverse events (irAEs) that can be life-threatening if not treated promptly and appropriately [56][57][58]. By definition, an irAE occurs as a result of ICI-induced "inappropriate" immune system activation against the host's own cells [8]. While cutaneous irAEs are among the most common, any organ system can be involved, and side effects can range from colitis to dermatitis, hepatitis, pneumonitis, as well as endocrinopathies such as thyroiditis and hypophysitis [56][57][58]. In addition, ICIs have been associated with musculoskeletal adverse events including inflammatory arthritis, myositis, and polymyalgia rheumatica [59]. While neurotoxicity, cardiotoxicity, and pulmonary toxicity are less frequent, they tend to be the most severe and life-threatening. In addition, much remains unknown about the long-term effects of ICIs on patients and the management of irAEs [56][57][58].

The Role of the Microbiota in Influencing the Rate of Immune-Mediated Adverse Events

Cutaneous irAEs can range from mild pruritus to life-threatening epidermal necrolysis [56][57][58]. Hu et al. studied the effect of the skin microbiota in a mouse model of cutaneous irAE [56]. Treatment with anti-CTLA-4 alone did not produce any skin inflammation in the mouse model, nor did local skin colonization with Staphylococcus epidermidis [56]. However, when the mice received concurrent cutaneous colonization with S. epidermidis and systemic anti-CTLA-4, skin inflammation developed on days 6 to 8 of treatment [56]. The inflammatory infiltrate consisted of macrophages and cytokine-producing neutrophils and monocytes [56]. This innate, hyperactive immune response to anti-CTLA-4 treatment was found to depend on IL-17 production by commensal-specific T cells in an excessive, dysregulated manner [56]. These findings suggest that alterations in the skin microbiome may affect the development of cutaneous irAEs.

Colitis is a common irAE seen with both anti-CTLA-4 and anti-PD-1 antibodies and may be severe or life-threatening [60,61]. The impact of the gut microbiome on the development of immune-related colitis has been extensively studied. The Bacteroidetes phylum has been associated with increased resistance to colitis. Dubin et al. analyzed the composition of the intestinal microbiota in 34 patients with metastatic melanoma being treated with anti-CTLA-4. Patients who were not diagnosed with gastrointestinal inflammation between 13 and 59 days of treatment were found to have a higher abundance of Bacteroidaceae, Rikenellaceae, and Barnesiellaceae [60]. It is thought that the Bacteroidetes phylum stimulates Treg cell differentiation, which may play a role in certain patients' resistance to colitis [60,62,63]. Another study by Chaput et al. showed similar findings. In this study, 26 patients with metastatic melanoma received anti-CTLA-4 and were closely observed for the development of colitis [39]. Abundance of the Bacteroidetes phylum was associated with resistance to colitis. In addition, patients with Firmicutes-rich microbiota were more likely to develop colitis. Interestingly, it was also found that decreased bacterial diversity was associated with gastrointestinal inflammation [39]. However, opposing results were found in patients treated with dual ICIs (anti-CTLA-4 and anti-PD-1) [6]. In fact, patients who were more resistant to colitis had a higher abundance of Firmicutes, while patients prone to colitis had a higher abundance of Bacteroidetes [6]. Patients and pre-clinical models that had Bacteroidetes-rich profiles and developed colitis were found to upregulate IL-1β. This was confirmed by treatment with an IL-1 receptor antagonist (anakinra) along with the dual ICIs, which resulted in less inflammation. In addition, transcriptional profiling revealed a prompt and selective upregulation of Il1b [6].

Liu et al. found a link between the composition of a patient's gut microbiome and the likelihood of developing irAEs from anti-PD-1 antibodies [64]. Patients with a less diverse gut microbiome had a higher risk of experiencing irAEs [64]. A total of 150 patients were included in the study, and irAEs due to anti-PD-1 included pruritus and/or rash, thyroid dysfunction, and mild to severe diarrhea. Patients were grouped into no/mild irAE versus severe irAE groups based on the National Cancer Institute Common Terminology Criteria for Adverse Events (CTCAE V5.0) grading system. Patients experiencing severe irAEs were found to have gut microbiota abundant in Streptococcus, Paecalibacterium, and Stenotrophomonas. In contrast, patients with mild irAEs had gut microbiota enriched for Faecalibacterium. Patients experiencing each irAE were analyzed and compared to patients with no irAE. The microbiota of patients experiencing pruritus and rash showed no significant difference compared to those without irAEs.
However, patients with no irAEs had a higher abundance of Bacteroides and Lactobacillus compared to patients experiencing thyroid dysfunction, who had abundant Paecalibacterium. It was also found that patients with severe diarrhea had a higher presence of Stenotrophomonas and Streptococcus, whereas patients without irAEs or with mild diarrhea had higher levels of Faecalibacterium and Bacteroides [64]. In summary, these studies highlight the interplay between the gut microbiota and the development of irAEs.

A retrospective analysis of 327 cancer patients treated with ICIs for multiple tumor types found that patients who developed diarrhea or colitis had improved overall survival compared to those who did not [65]. The mechanism underlying improved survival in patients who develop immune-related colitis is not known, but it is possible that the gut microbiome plays a role and could be targeted in future prospective studies. Future work is also needed to clarify the specific microbial profiles affecting colitis risk, which will improve risk assessment for, and management of, immune-related colitis. There are multiple ongoing prospective trials investigating the impact of the microbiome on ICI efficacy and toxicity (NCT03643289 and NCT04107168). Table 5 summarizes the role of the microbiota in influencing irAEs in skin cancer; its key findings are as follows:

- S. epidermidis (CTLA-4 mAb, cutaneous irAE): mice whose skin was colonized with S. epidermidis and then treated with systemic ICI developed skin inflammation on days 6 to 8 of treatment; no skin inflammation was seen with ICI treatment alone or S. epidermidis colonization alone [56].
- Bacteroidetes phylum (Bacteroidaceae, Rikenellaceae, Barnesiellaceae) (PD-1 mAb + CTLA-4 mAb, colitis): the Bacteroidetes phylum was associated with an increased tendency to colitis, whereas Firmicutes-rich microbiota were associated with resistance to colitis.

Conclusions

In conclusion, emerging evidence suggests that the composition and diversity of the skin and gut microbiota play a critical role in modulating the efficacy of immune checkpoint inhibitors in the treatment of skin cancers. Despite the progress made in the field, several challenges remain in harnessing the potential of microbiota-based therapies to optimize immune checkpoint inhibitor efficacy. Future research efforts should aim to identify specific microbial profiles that predict the response to therapy and elucidate the molecular mechanisms underlying their effects on tumor immunity. This knowledge may enable the development of microbiota-based interventions, such as fecal microbiota transplantation or probiotics, to enhance the clinical efficacy and safety of immune checkpoint inhibitors in patients with skin cancers.

All authors have read and agreed to the published version of the manuscript.

Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest: The authors declare no conflict of interest.
New installation for inclined EAS investigations

The large-scale coordinate-tracking detector TREK for the registration of inclined EAS is being developed at MEPhI. The detector is based on multiwire drift chambers from the neutrino experiment at the IHEP U-70 accelerator. Their key advantages are a large effective area (1.85 m²) and good coordinate and angular resolution with a small number of measuring channels. The detector will be operated as part of the experimental complex NEVOD, in particular jointly with a Cherenkov water detector (CWD) with a volume of 2000 cubic meters and the coordinate detector DECOR. The first part of the detector, named the Coordinate-Tracking Unit based on Drift Chambers (CTUDC) and consisting of two coordinate planes of 8 drift chambers each, has been developed and mounted on opposite sides of the CWD. It has the same principle of joint operation with the NEVOD-DECOR triggering system and the same drift chamber alignment, so the main features of the TREK detector can be examined. Results of the CTUDC development and its joint operation with the NEVOD-DECOR complex are presented.

Introduction

The aim of the large-scale coordinate-tracking detector is to address the problem of the excess of muon bundles that increases with the energy of the primary cosmic rays [1,2], which can be caused by either cosmophysical or nuclear-physical reasons [3]. The only characteristic that responds differently to changes in the composition of cosmic rays and to the inclusion of new physical processes is the energy of the muon component of extensive air showers [3], which up to now has not been investigated sufficiently. Such studies are performed at the experimental complex NEVOD-DECOR [4]; however, the coordinate detector DECOR does not cover the entire aperture of the Cherenkov water detector (CWD) and does not exclude the possibility of several muons passing between the individual supermodules of the detector. Besides, the size of its cells limits the possibility of separating two or more particles at small distances (less than 3 cm). The new coordinate-tracking detector [5] based on drift chambers will increase the coverage of the side aperture of the Cherenkov water detector NEVOD and significantly improve the resolution of close tracks.

Drift chambers

Drift chambers (DC) came to accelerator particle experiments and cosmic ray studies in the late 1970s, after proportional chambers. Their main advantage is the ability to measure charged particle tracks with an accuracy significantly better than the characteristic distance between sensitive elements (signal wires). This property makes it possible to create large-scale installations. One such setup, created in the 1980s, was the IHEP-JINR neutrino detector at the U-70 accelerator, for which a large-area multiwire drift chamber was developed [6]. The overall size of the chambers is 4000 × 508 × 112 mm³ with a sensitive area of 1.85 m², which is 91% of the side chamber area. There are four sense wires in the middle of the chamber, alternately shifted by ±0.75 mm parallel to the drift direction to resolve the right-left ambiguity. The distance between the sense wires is 10 mm. There are two guard wires to remove the edge effect. The sense and guard wires are surrounded by the cathode wires. A uniform electric field is created in the drift gap by field-forming wires with a uniformly distributed potential. Due to the good uniformity of the electric field inside the chamber, the electron drift velocity can be assumed to be constant, and a linear relation between drift time and coordinate can be used. The configuration of the signal wires enables reconstruction of the projection of a particle track onto the plane orthogonal to the wires; thus, for reconstruction in space, at least two non-parallel drift chambers are required.
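The geometry just described lends itself to a simple reconstruction scheme: the uniform field gives a linear drift-time-to-distance relation, and the ±0.75 mm stagger of the sense wires lets a straight-line fit pick the correct left/right solution. The following sketch illustrates the idea; the drift velocity, coordinate conventions, and function names are illustrative and are not taken from the TREK software.

```python
# Illustrative sketch of drift-chamber coordinate reconstruction.
# A uniform field gives a linear time-to-distance relation; the +/-0.75 mm
# stagger of the four sense wires lets a straight-line fit pick the correct
# (left/right) sign for each hit.
from itertools import product

V_DRIFT_MM_PER_US = 50.0                 # assumed drift velocity, mm/us (run-dependent)
WIRE_Y = [-15.0, -5.0, 5.0, 15.0]        # 10 mm wire spacing (illustrative origin)
STAGGER = [+0.75, -0.75, +0.75, -0.75]   # alternating shift along drift direction

def candidate_x(drift_time_us, wire):
    """Two mirror candidates for the track coordinate at one wire."""
    d = V_DRIFT_MM_PER_US * drift_time_us
    x0 = STAGGER[wire]
    return (x0 - d, x0 + d)

def resolve_track(times_us):
    """Try all left/right sign combinations; keep the straightest track."""
    best = None
    for signs in product((0, 1), repeat=4):
        xs = [candidate_x(t, w)[s] for w, (t, s) in enumerate(zip(times_us, signs))]
        # least-squares residual of a straight line x(y) through the four hits
        n = len(xs)
        ybar = sum(WIRE_Y) / n
        xbar = sum(xs) / n
        slope = sum((y - ybar) * (x - xbar) for x, y in zip(xs, WIRE_Y)) \
                / sum((y - ybar) ** 2 for y in WIRE_Y)
        ssr = sum((x - (xbar + slope * (y - ybar))) ** 2 for x, y in zip(xs, WIRE_Y))
        if best is None or ssr < best[0]:
            best = (ssr, xs, slope)
    return best  # (residual sum, resolved coordinates, track slope)
```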
The chamber gas volume is enclosed by an aluminium alloy case, 1.5 mm thick, which serves simultaneously as an electric screen and as the chamber frame. The chamber ends are closed by plexiglass plugs, where the wires, gas inlet and outlet, amplifiers, high-voltage supply circuit and cable connectors are assembled. Blind holes in the plugs are provided for the installation of the drift chambers so that they are firmly isolated from the setup framework. The chamber is filled with a gas mixture of Ar (94%) and CO2 (6%) at a small overpressure of about 10-20 mbar. The spatial accuracy of the IHEP drift chamber is 0.6 mm and its angular resolution is about 0.03 rad. The right-left ambiguity is resolved in 98% of events. The signals from the wires are processed by an on-board amplifier-shaper that forms 75-100 ns pulses (depending on the input signal) at LVDS levels.

Placement

The coordinate-tracking unit based on drift chambers (CTUDC) consists of two vertical coordinate planes installed at different sides of the CWD in the short galleries of the third floor of the NEVOD building, one floor above the DECOR supermodules (Fig. 1). Such a location allows the registration of near-horizontal tracks by the CTUDC (triggered by the CWD) or by joint operation with DECOR, which will significantly increase the range of muon track zenith angles from 85°-95° to 80°-100°. Each plane consists of 8 drift chambers installed in two rows, overlapping by 30 cm to exclude dead zones at the chamber ends; this results in a 4° angle between the planes and the CWD wall. The chambers are mounted on a special frame (Fig. 1) that allows precise adjustment of the DCs in all directions. The distance between the registration system of the setup and the chambers differs between the planes. This causes a difference in cable lengths and requires additional commutation blocks. The effective area of each plane is 14.8 m², while the area of the two DECOR supermodules located a floor below is 17.5 m², so the total area of the coordinate detectors for the registration of near-horizontal particles arriving along the CWD almost doubles.

Power supply

The drift chambers are supplied with two high voltages, 12 kV (field-forming wires) and 2.2 kV (signal wires), and with a ±6 V low voltage. The high voltage is provided by a multi-channel HV power supply controlled by a PC via a USB bus. The CTUDC software monitors the current on the wires, providing a smooth rise of the voltage when the supply is turned on. A special microcontroller receives commands from the software and switches a relay to enable a light alarm outside the CTUDC gallery. The most dangerous process is turning the 12 kV on or off, since it induces an electrostatic potential on the chambers. The light alarm flashes at 1 Hz during this process until the current stops rising (it glows steadily at a constant voltage). These capabilities of the supply make it possible to obtain reliable noise characteristics of the chambers, because every point in such a measurement should be obtained at a steady current [7]. Two linear low-voltage supplies provide ±6 V to the amplifiers situated on the DCs; their total consumption is about 6 A.
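The ramp-up procedure for the 12 kV supply described above can be sketched as a simple control loop. This is a hypothetical illustration: the `supply` and `alarm` objects and their methods are invented names standing in for the real USB-controlled supply and the microcontroller-driven alarm, and the step size and current limit are assumptions.

```python
# Illustrative HV ramp-up loop with current monitoring (hypothetical
# hardware API; not the actual CTUDC control software).
import time

RAMP_STEP_V = 100        # assumed step size
SETTLE_S = 2.0           # wait between steps
CURRENT_LIMIT_UA = 5.0   # trip threshold (illustrative)

def ramp_up(supply, alarm, target_v):
    alarm.blink(hz=1)                      # warn while the voltage is changing
    v = supply.read_voltage()
    while v < target_v:
        v = min(v + RAMP_STEP_V, target_v)
        supply.set_voltage(v)
        time.sleep(SETTLE_S)
        if supply.read_current_uA() > CURRENT_LIMIT_UA:
            supply.set_voltage(0)          # trip: dump the voltage and stop
            raise RuntimeError("overcurrent during HV ramp")
    # keep blinking until the current stops rising, then switch to steady glow
    prev = supply.read_current_uA()
    while True:
        time.sleep(SETTLE_S)
        cur = supply.read_current_uA()
        if cur <= prev:
            break
        prev = cur
    alarm.steady_on()
```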
Gas supply

The gas preparation system is situated on the first floor of the building. It consists of two gas tanks (Ar and CO2), a ramp, two controlled flow meters, a gas mixer and a PC, which manages the gas supply modes. The mixture is supplied to the chambers via PVC tubes. The software of the system allows tracking of the volumes of gas passing through the chambers and turns off the flow if one component of the mixture is exhausted. The CTUDC has a sequential gas tube connection, but every chamber or group of chambers can be connected to the gas system separately.

Registration system

The CTUDC is designed for joint operation with the triggering system of the experimental complex NEVOD (NEVOD TS), which binds the registration systems of the CWD, DECOR, the calibration telescope system (CTS) and the array of neutron detectors PRISMA. The triggering system has rather fast data handling: the period between the passage of a particle through the working volume of the CWD and the trigger formation is about 500 ns. On the other hand, the maximum drift time of electrons in the drift chamber is 6 µs, so the registration system and DAQ of the CTUDC cannot be directly integrated into the NEVOD TS and are implemented separately, with the possibility of off-line interconnection between the NEVOD and CTUDC data. The registration system of the CTUDC (Fig. 2) consists of the CTUDC main PC (MPC) and a VME crate with an optical bridge and a 128-channel time-to-digital converter (TDC) CAEN V1190. The TDC resolution is 100 ps, while the time resolution of the drift chambers is about 5 ns. The internal memory of the device allows a large number of events to be stored until the TDC readout. The ability to work in continuous mode allows per-channel measurement of the signal and noise rates with more than 90% live time.

The timing diagram of the CTUDC registration system is shown in Fig. 3. The NEVOD TS records an event if one of the setups of the experimental complex is triggered: CWD, DECOR, CTS or PRISMA. Each setup has its own trigger conditions. For the CWD, it is the lighting up of more than 60 measuring modules [8]; for DECOR, it is the triggering of at least two supermodules. For the CTS there are three conditions: triggering of several detectors of the top plane, triggering of several detectors of the bottom plane, or triggering of a single detector in each plane. For PRISMA, it is the triggering of two neutron detectors by charged particles (the threshold corresponds to the simultaneous passage of five particles through the scintillator detector). The NEVOD TS gathers information from all setups for every event, regardless of whether they produced a trigger. In the main mode of operation, the trigger for the CAEN TDC should be received after all hits of the event, so the signal from the NEVOD TS is delayed by 8 µs. The matching window of the TDC is chosen to be safely longer than the delay value, namely 12 µs. As a result, the starting point for counting the electron drift time is different from zero in the TDC data. The position of that point in the data is called the offset. It consists of two parts: a basic offset common to all channels (~1.5 µs) and an individual offset caused by the difference in the length of the signal wires of the right and left planes of the CTUDC and the different twisting step in them. For several types of events, the MPC of NEVOD sends a network packet via Ethernet with information about the last event for subsequent matching of the CTUDC and NEVOD TS data; the most important information in the packet is the event number.
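A minimal sketch of how raw TDC hit times could be turned into drift times under this scheme is shown below; the constants and data layout are illustrative rather than taken from the CTUDC DAQ code.

```python
# Illustrative conversion of raw TDC hit times to drift times (times in ns).
# The trigger is delayed by 8 us, so hits sit inside a 12 us matching window
# at a nonzero offset; per-channel offsets absorb cable/wire-length
# differences. Numbers and structures here are illustrative.
BASIC_OFFSET_NS = 1500.0            # common offset for all channels (~1.5 us)
MAX_DRIFT_NS = 6000.0               # maximum electron drift time

def drift_times(hits, channel_offset_ns):
    """hits: list of (channel, raw_time_ns) pairs from one TDC event."""
    out = []
    for ch, t_raw in hits:
        t = t_raw - BASIC_OFFSET_NS - channel_offset_ns.get(ch, 0.0)
        if 0.0 <= t <= MAX_DRIFT_NS:   # reject hits outside the drift range (noise)
            out.append((ch, t))
    return out
```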
Software

The CTUDC software is divided into server and client programs. The former runs continuously on the MPC of the CTUDC; it conducts the exposition and controls the VME crate with the TDC, the HV supply and the light alarm. The client can be launched on multiple computers in a shared network with the server; it can control the modes of the exposition and of the high-voltage supply. It is also used as a program for remote monitoring of the setup. Communication between the applications is carried out via the UDP protocol in a synchronous mode. Contention between client programs for server time was not observed. Because of the relatively high frequency of events in the NEVOD TS, network packets cannot be sent for each event, as that could overload the internal network; they are sent only for two types of events: the triggering of one DECOR supermodule in each short gallery, and the simultaneous triggering of the CWD and DECOR. The total frequency of such events is around 1 Hz. After the reception of a network packet, a readout of TDC events occurs. The server software then compares the number of read events with the difference between the event numbers in the last two network packets. If these numbers match, all read events are assigned the corresponding intermediate numbers. Otherwise, this set of events is logged in a special file for individual processing. This happens for 4% of events on average. Currently, a new mode of exposition is being tested: when a difference in the numbers is found, the system waits for a third network packet and then performs the comparison for the events between the first and the third packet. This reduces the number of dropped events to 0.5%. Every 20 minutes, NEVOD is monitored for 30 seconds; the MPC of the CTUDC receives a network packet at the beginning of this process, and the software starts a 30-second measurement of the noise from the drift chambers. The results are stored on the server, which makes it possible to monitor the performance of the DCs and to apply corrections to specific pieces of data.
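The event-number matching described above amounts to simple bookkeeping; a hypothetical sketch (function and field names are illustrative, not from the CTUDC server code) follows.

```python
# Illustrative sketch of the off-line event matching: the number of TDC
# events read out between two NEVOD TS network packets must equal the
# difference of the event numbers carried by those packets; otherwise the
# batch is set aside for individual processing.
def match_batch(tdc_events, prev_packet_no, curr_packet_no, logfile):
    expected = curr_packet_no - prev_packet_no
    if len(tdc_events) == expected:
        # assign intermediate NEVOD event numbers to the TDC events
        return [(prev_packet_no + i + 1, ev) for i, ev in enumerate(tdc_events)]
    logfile.write("mismatch: read %d, expected %d\n" % (len(tdc_events), expected))
    return []  # ~4% of batches end up here and are processed individually
```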
Processing of CTUDC data

Primary processing of the CTUDC data includes the calibration of the drift chambers (forming accurate configuration files for further processing) and the passportization of the experimental runs.

Calibration

The calibration of the unit is performed after every replacement of drift chambers or of the amplifiers in them. It includes the correction of the data, of the positions of the drift chambers, and the selection of individual offsets for the measuring channels. Spatial calibration is performed by cross-calibration of the CTUDC with the DECOR supermodules [9]. The position of the drift chambers is varied in software until the position corresponding to the best matching of the tracks in both systems is found. The determination of the individual offsets for each wire is performed using calibration series. The statistics for one channel in such a run amount to about 100,000 hits. Figure 4 shows the distribution of hits in drift time for one channel. The background of the distribution is due to noise; the duration of the main part of the distribution is 5.5-6 µs, which corresponds to the maximum drift time of the electrons and depends on the current state of the gas mixture in the chamber. The position of the left edge of the distribution corresponds to the individual offset of the channel. However, this determination cannot be considered fully reliable because of the rather large bin width (about 10 ns) and the large influence of jitter in the arrival of the trigger signal. Therefore, the precise value of the individual offset is determined by a sequential search method. For each chamber, about 10,000 single tracks are selected into a separate file. The individual offsets of the channels are varied in increments of 2.5 ns relative to each other, and for each combination the software reconstructs all tracks using the least squares method. The algorithm chooses the combination for which the number of tracks whose sum of squared deviations from the experimental points is less than a pre-selected threshold (10 mm²) is maximized. The basic offset is obtained by the same method, but using events nearly perpendicular to the chamber plane; the criterion is the smoothness of the distribution of the coordinate of the intersection of the reconstructed track with the drift chamber plane. If the offset is chosen incorrectly, a peak or a gap appears in the region of this distribution corresponding to the center of the chamber.
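The sequential offset search can be sketched as a brute-force scan over per-wire offset combinations. In the sketch below, the `fit_track` callable stands in for the least-squares track fit, and the scan range and function names are assumptions for illustration, not the actual calibration code.

```python
# Illustrative sketch of the individual-offset search: vary per-wire offsets
# in 2.5 ns steps and keep the combination that maximizes the number of
# tracks whose least-squares residual sum stays below the 10 mm^2 threshold.
from itertools import product

STEP_NS = 2.5
SSR_THRESHOLD_MM2 = 10.0

def count_good_tracks(tracks, offsets_ns, fit_track):
    """fit_track(track, offsets_ns) -> sum of squared residuals in mm^2."""
    return sum(1 for trk in tracks
               if fit_track(trk, offsets_ns) < SSR_THRESHOLD_MM2)

def search_offsets(tracks, fit_track, n_wires=4, steps=range(-4, 5)):
    best_offsets, best_count = None, -1
    for combo in product(steps, repeat=n_wires):
        offsets = [s * STEP_NS for s in combo]
        good = count_good_tracks(tracks, offsets, fit_track)
        if good > best_count:
            best_offsets, best_count = offsets, good
    return best_offsets, best_count
```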
Determination of current chamber characteristics

The value of the drift velocity is determined individually for each chamber in each experimental run and is recorded in the passport of the given run. Not only the width of the drift time distribution is important, but also its form. During degradation of the gas mixture, the distribution of the hit drift times (Fig. 4) changes its form: it falls off in the region of large drift times and becomes trapezoidal, i.e., the efficiency of the chamber drops for the registration of particles that have passed far from the center of the chamber. Such degraded chambers should be flushed with 600 liters of the gas mixture. The average time of operation of a chamber without additional flushing is about 3 months. One of the most important characteristics of the drift chambers is the efficiency of the signal wires. Efficiency is defined as the ratio of the number of full events (with all wires triggered) to the sum of this number and the number of events in which the given signal wire did not participate. It was 99.95% at the test bench. In the experiment, this value is 99.3% on average. The decrease in efficiency is due to the increased threshold of the shaper-amplifiers, which was required because of the higher level of electromagnetic interference at the CTUDC in comparison with the test bench. The threshold level of the amplifiers is set so that the noise rate on the chamber channels does not exceed 2 kHz.

Passportization

To handle joint events of the CTUDC and the NEVOD TS, off-line linking of the events is carried out. The joining is performed in several stages. The first step is to separate the CTUDC data into parts corresponding to the experimental runs of the NEVOD TS (approximately 40 hours of exposition each). Then the DECOR tracks are reconstructed in the NEVOD data. The third stage is the formation of a single file with the events, with the structure given in Table 1. The passportization of the experimental runs is carried out after the joining. The passport includes information about the correspondence of the events, the efficiency of the measuring channels and statistics on the frequency of different classes of events in the CTUDC. The drift velocities are also specified and are later used in the event handling.

First results

To date, more than 2500 hours of debugging and measuring series have been carried out with the CTUDC. Based on these data, a cross-calibration with the coordinate-tracking detector DECOR was performed and the individual offsets for the measuring channels were determined. Figure 5 shows the distribution of events according to the setups that triggered the NEVOD TS. The graph shows that events from the CTS and DECOR (usually single-muon events) are the least likely to be accompanied by a CTUDC response. Most of the events from PRISMA are accompanied by a CTUDC response. Figure 6 shows the same distribution for the average number of triggered drift chambers for different trigger types. Events of the PRISMA setup typically have a greater density of charged particles (due to the high threshold on the detectors). In such events, on average, up to 7 drift chambers have tracks. On the other hand, events with a CTS trigger are characterized by a large difference between the number of chambers with full tracks and the number of chambers having at least one hit. This suggests that most of the hits on this trigger are accidental (noise). The setup is able to register single muons and muon bundles with densities up to 10 particles per m². Figure 7 shows the distribution of registered events in track multiplicity. Processing of the data uses three methods of reconstruction of multiple events: the sequential search method, the straight-line finding method and the histogram method. The last method determines the number of tracks at the same zenith angle in the event. It is the most suitable for the determination of multiplicity: because of the remoteness of the muon generation points, their trajectories are quasiparallel. The influence of the dead time of the measuring channels (~100 ns) and the number of secondary particles produced in the concrete walls of the NEVOD building are also taken into account in determining the number of particles in the bundle. Figure 8 shows the distribution of events according to the value of the angle between the projection of the track onto the plane orthogonal to the signal wires and the plane passing through them. The 90° angle corresponds to vertical tracks, while the zero angle corresponds to tracks with a zenith angle of 90° that are perpendicular to the DC plane. Figure 9 shows the same distribution for multiple events. Both distributions have maxima in the region of 20°, which is due to the fact that the events are selected according to the trigger from the CWD and DECOR, meaning that particles moved nearly horizontally through the chambers into the water volume or in the opposite direction.

Conclusion

The new coordinate-tracking unit based on drift chambers (CTUDC) has been developed at MEPhI. The installation greatly improves the capabilities of the experimental complex NEVOD in the registration of near-horizontal extensive air showers. The setup has successfully passed the adjustment and calibration stages and is currently gathering experimental data.
A clinical study of the effect of calcium sodium phosphosilicate on dentin hypersensitivity

Objective: Dentinal hypersensitivity is a commonly encountered problem with varied treatment options for its management. A large number of home-use products have been tested and used for the management of dentinal hypersensitivity. This 8-week clinical trial investigates the temporal efficacy of a commercially available calcium sodium phosphosilicate containing toothpaste in comparison to a potassium nitrate containing toothpaste. Methods: A total of 20 subjects between the ages of 18 and 65 years were screened for a visual analogue scale (VAS) score for sensitivity of 5 or more by testing with a cold stimulus and randomly divided into test and positive control groups. Baseline sensitivity VAS scores to an air evaporative stimulus were recorded for a minimum of two teeth. The subjects were prescribed the respective dentifrices and re-evaluated for sensitivity scores at 2, 4 and 8 weeks. Results: The study demonstrated a reduction in symptoms for all treatment groups from baseline to 2, 4 and 8 weeks. The calcium sodium phosphosilicate group showed a higher degree of effectiveness at reducing hypersensitivity to the air evaporative stimulus at 2 weeks than commercially available potassium nitrate. However, there was no significant difference in the scores of subjects using the calcium sodium phosphosilicate toothpaste as compared to potassium nitrate at 4 weeks and 8 weeks. Conclusion: Calcium sodium phosphosilicate showed a greater reduction in sensitivity compared to potassium nitrate at an earlier stage, which is of high clinical value. However, based on the findings of the present study, the long-term effects of calcium sodium phosphosilicate seem to be less promising than previously claimed. Key words: Dentinal desensitizing agents, dentinal hypersensitivity, toothpaste, pain measurement.

Introduction

Dentine hypersensitivity (DH) is the term used to describe a common, painful condition of the permanent teeth, the etiology of which is still poorly understood; the various mechanisms proposed to explain the development of dentinal hypersensitivity and the treatment alternatives have been reviewed (1,2). The most accepted of these is the hydrodynamic theory, which was first explained by Gysi in 1900 and for which experimental evidence was provided by Brännström. According to this theory, the movement of dentinal fluid upon stimulation with a thermal, chemical, evaporative or electrical stimulus is responsible for the excitation of the underlying dentinal mechanoreceptors, resulting in sensitivity. Thus, many treatment options aim at achieving dentinal tubule occlusion to prevent dentinal fluid movement, thereby reducing hypersensitivity. The difficulty found in treating DH is reflected in the enormous number of techniques and therapeutic alternatives to relieve it (3). Several methods and materials have been tried to reduce dental sensitivity, ranging from home-use, over-the-counter products such as desensitizing mouthwashes, dentifrices or tray-application foams to in-office products such as varnishes, liners, restorative materials, dentinal adhesives, iontophoresis procedures and, more recently, lasers.

-Home use products - dentifrices

Home-use products are the most realistic and practical means of treating most patients with mild to moderate dentine hypersensitivity, and they generally form the first step in routine management.
Among these, desensitizing dentifrices have established themselves as the principal home-care therapeutic agents, owing to the fact that they are readily and widely available, cost effective, simple to use and non-invasive, and that the habit of tooth brushing is almost universal (4). A large number of agents have been shown to reduce the tubule diameter by the precipitation of crystals and have also been shown to be clinically efficacious.

-Potassium nitrate

Potassium nitrate containing dentifrices have consistently shown benefit in alleviating hypersensitivity symptoms (5-7). They mainly act by blocking neural transmission. A review by Kanapka concluded that potassium nitrate use reduces hypersensitivity in 8-12 weeks (8). However, as Porto et al. (3) have pointed out, in spite of a large amount of literature it is not possible to reach a consensus about a product that represents the gold standard in the treatment of dentinal hypersensitivity. Nevertheless, a large number of commercially available dentifrices contain potassium nitrate.

-NovaMin technology

NovaMin was developed in the 1990s as a modification of bioactive glass, which had by then proven to be useful in bone regeneration and repair. It was found that NovaMin also reacted with tooth dentin, and it was modified by grinding the particles to obtain particles small enough to access the dentinal tubules. Each microscopic NovaMin particle serves as a delivery system for these ionized bioactive species. When the particle is exposed to fluid (saliva or tap water), it instantly reacts, releasing mineral ions of calcium (Ca) and phosphate (PO4) that augment the natural remineralization process. As the particle reactions continue and the deposition of Ca and PO4 complexes continues, this layer crystallizes into hydroxycarbonate apatite, which is chemically and structurally equivalent to biological apatite. The combination of the residual NovaMin particles and the newly formed hydroxycarbonate apatite layer results in the physical occlusion of dentinal tubules, which relieves hypersensitivity. In-vitro analyses have shown significant occlusion of tubules with the NovaMin compounds (9,10). Each NovaMin bioglass particle is made of calcium sodium phosphosilicate with 25% sodium, 25% calcium, 6-8% phosphate and, as the remainder, silica. Previous studies (10-12) have shown a clear benefit of NovaMin containing dentifrices over controls. A recent review (13) also concluded that the data look promising, but more research is needed. The present study compares the temporal efficacy of a calcium sodium phosphosilicate containing toothpaste (Vantej®, Dr Reddy's Labs, Hyderabad, India) with a potassium nitrate containing toothpaste (Sensodent-K®, Warren Pharmaceuticals, Mumbai, India), both of which are commercially available.

Material and Methods

This study was a single-center, randomized, double-blind, parallel-group clinical trial. It was conducted in the Department of Periodontics at this institution in accordance with the Declaration of Helsinki and Guidelines for Good Clinical Practice. The study duration was 8 weeks, in which sensitivity scores were measured at baseline and at 2, 4 and 8 weeks. After ethical approval, subjects were selected from the outpatient section of the Department of Periodontics. The duration of the study was from October 2009 to August 2010.

-Inclusion Criteria:
1. Patients need to have at least two sensitive permanent teeth.
2. Sensitive tooth surfaces are selected if they have wasting diseases and/or gingival recession.
3. No history of periodontal therapy in the past one year.

-Exclusion Criteria:
1. Subjects with orthodontic appliances or bridge work that may interfere with evaluation.
2. Medical (including psychiatric and pharmacotherapeutic) histories that may compromise the study protocol.
3. Allergies.
4. Any dental treatment which may have an effect on the desensitizing agent being used.
5. Any other pathology.
6. Known history of allergies to dentifrice contents.

Systemically healthy subjects of both genders, between the ages of 18 and 65 years, who were well versed with the use of a toothbrush and dentifrice for oral hygiene maintenance, were considered for the study. Informed consent was obtained from the compliant subjects after explaining the rationale and purpose of the study. Diagnosis of dentinal hypersensitivity was based on the patient's primary complaint and a detailed history regarding the subject's perception of sensitivity to thermal stimuli (hot or cold), sweet or sour foods, drinks and tooth-brushing. Other causes of dental pain (caries, periodontal pain) were ruled out during clinical examination. Teeth included in the study had no caries or restorations. To assess tooth sensitivity, a cold pack test of the sensitive areas was performed using ice application. Sensitivity was measured using a 10 cm visual analog scale (VAS), with a score of 0 cm being a no-pain response, a score of 5 cm perceptible discomfort, and a score of 10 extreme pain or discomfort (modification of Huskisson's VAS, 1974). The clinical examination and sensitivity tests were carried out by a single examiner.

The two toothpastes compared were a commercially available non-aqueous toothpaste containing 5% calcium sodium phosphosilicate (test) and a commercially available toothpaste containing 5% potassium nitrate (positive control). The present study employed a double-blinding procedure to eliminate subjective bias: the brand names on the tubes were painted over with a uniform color by a third person. Patients reporting a grading of 5 or more on at least 2 teeth (buccal/facial aspects of incisors, canines and premolars) were included in the study and designated into Group 1, receiving the toothpaste containing 5% calcium sodium phosphosilicate (tube painted red), and Group 2, receiving the toothpaste containing 5% potassium nitrate (tube painted black).

Scoring of tooth sensitivity was carried out using controlled air pressure from a standard dental syringe at ambient temperature, directed perpendicularly and at a distance of 1 to 3 mm from the exposed dentin surface of the test teeth, which were duly isolated while the adjacent teeth were protected with gloved fingers. VAS scores were recorded at baseline. Phase I periodontal therapy (scaling) was instituted for all subjects, and patients were provided with the respective dentifrice. The subjects were instructed to brush for 5 minutes, twice daily, throughout the study period and asked to refrain from consuming very hot, cold, sweet or sour food or drinks. Subjects were also directed to refrain from any other dentifrice or mouthrinse during the trial but were allowed to continue their normal oral hygiene practice. Assessment was performed again at 2, 4 and 8 weeks.

-Statistical analyses: Mean VAS scores and mean ± S.D. were calculated from the individual scores of all subjects in a treatment group. Student's paired t-test was used to find the difference between the baseline, 2-week, 4-week and 8-week scores within each group. Mean scores were compared among groups at baseline, 2 weeks, 4 weeks and 8 weeks using one-way analysis of variance (ANOVA). The data of the test and control groups at 2 weeks, 4 weeks and 8 weeks were compared using analysis of covariance (ANCOVA) with baseline as covariate. The percentage reductions from baseline to 2 weeks, baseline to 4 weeks and baseline to 8 weeks were compared between the two groups using Student's unpaired t-test.
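The ANCOVA and percentage-reduction computations described above can be reproduced with standard statistical software; the sketch below uses statsmodels and pandas, with illustrative file and column names (not from the actual study data).

```python
# Illustrative ANCOVA matching the analysis above: week-8 VAS as the
# dependent variable, treatment group as a factor, baseline VAS as a
# covariate (column names are illustrative).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vas_scores.csv")  # assumed columns: group, vas_base, vas_w8
model = smf.ols("vas_w8 ~ C(group) + vas_base", data=df).fit()
print(model.summary())

# Percentage reduction from baseline per group, as reported in Table 1
df["pct_reduction"] = 100 * (df["vas_base"] - df["vas_w8"]) / df["vas_base"]
print(df.groupby("group")["pct_reduction"].mean())
```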
-Baseline demographics: A total of 20 subjects were followed up for a period of 8 weeks. An ANOVA of baseline sensitivity indicated no significant group differences for the air evaporative stimulus. Since the baseline scores of both groups were similar and showed no significant differences, these scores were used as a covariate for the ANCOVA.
-Significant improvement compared to baseline: Paired t-tests were carried out for each group, comparing sensitivity at two, four and eight weeks to baseline. The trend was one of increasing reductions in dentin hypersensitivity over time for both the test and the positive control group. The percentage reduction in sensitivity scores was 40.34%, 57.98% and 75.63% at 2, 4 and 8 weeks respectively for the test group, and 24.79%, 47.86% and 64.96% at 2, 4 and 8 weeks respectively for the positive control group (Table 1).
-Comparison between groups: Using an ANCOVA with the data from two, four and eight weeks as dependent variables and the baseline values as covariates, the sensitivity scores demonstrated a significant difference between groups at two weeks (p = 0.0207). However, there was no significant difference between groups with respect to the reduction of scores at 4 and 8 weeks (p values 0.1146 and 0.1152, respectively) (Table 2).
Discussion
The objective of this study was to evaluate the temporal efficacy and safety of a new commercially available desensitizing dentifrice containing calcium sodium phosphosilicate (Vantej®) and to compare it with a commercially available potassium nitrate-containing toothpaste (Sensodent-K®). An air evaporative stimulus was used to evaluate sensitivity, as it can be easily controlled over isolated teeth and simulates physiologic stimuli. The study duration of this trial was 8 weeks, as recommended by Holland et al. (14). A review of the literature by Gendreau et al. (15), based on randomized controlled clinical trials, supports the use of NovaMin in toothpaste formulations for providing relief from dentin hypersensitivity pain. The results of the present study demonstrate a reduction in symptoms for all treatment groups from baseline to 2, 4 and 8 weeks for measures of sensitivity. There was a remarkable pattern of reduction in DH with time for all variables during the 8-week active phase of the study, independent of treatment group. This is in agreement with the studies by Salian et al. (10), Pradeep and Sharma (12), Sharma et al. (16), West et al. (17) and Litkowski and Greenspan (18). The calcium sodium phosphosilicate group showed a higher degree of effectiveness at reducing hypersensitivity than the commercially available potassium nitrate for sensitivity to the air evaporative stimulus at 2 weeks. This is in accordance with the results of previous studies, which showed that calcium sodium phosphosilicate is more effective than potassium nitrate in reducing sensitivity scores as measured using the visual analogue scale (10,12,16).
Calcium sodium phosphosilicate toothpaste thus may show greater benefit at an early stage compared to potassium nitrate (16), which is also supported by the pilot study of Narongdej et al. (19). A comparative study by Parkinson and Willson (20) concluded that calcium sodium phosphosilicate imparts a significant level of dentinal occlusion, with durable occlusive deposits, following four days of twice-daily brushing in vitro. However, in contrast to the above-mentioned studies, there was no significant difference in the scores of patients using the calcium sodium phosphosilicate toothpaste compared to potassium nitrate at 4 weeks and 8 weeks. This could possibly be because both active agents were supplied using a dentifrice as the delivery vehicle, and the excipient (non-active) agents in the dentifrice may serve to occlude dentinal tubules over time, although a previous study failed to show tubule occlusion with a potassium nitrate-containing dentifrice (10). This effect could also be related to a natural decrease in dentin hypersensitivity over time, to patient perception of a decrease in symptoms by virtue of participation in a clinical trial, or to placebo products actually providing some degree of relief from dentin hypersensitivity. The 5% potassium nitrate toothpaste was used as a positive control in our study because it has proved clinically effective in the treatment of DH (5-8). In our study, the potassium nitrate group showed a significant percentage reduction in sensitivity scores, but the reduction was smaller than that of the calcium sodium phosphosilicate group at 2 weeks. However, it was found to be as effective as calcium sodium phosphosilicate after 4 and 8 weeks. The results of the present study may have to be extrapolated with caution given the small sample size and the lack of accounting for the placebo effect and the Hawthorne effect. Because calcium sodium phosphosilicate showed a greater reduction in sensitivity than potassium nitrate at an earlier stage, the direct implication of the present investigation is faster relief, which is of great clinical importance given the acute nature of dentinal hypersensitivity. Based on the findings of the present study, however, the long-term advantage of calcium sodium phosphosilicate seems to be less pronounced than previously claimed.
2016-05-13T23:22:34.469Z
2013-02-01T00:00:00.000
{ "year": 2013, "sha1": "a28698bec567d267e488975c017fdd1266ad4cd2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4317/jced.50955", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "6a751282f5acdba210f482c06fd03d11e1c7731b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214721926
pes2o/s2orc
v3-fos-license
A simplified calculation of the correlations between relatives
The correlation between relatives is one of the fundamental ideas and earliest successes of quantitative genetics. Whether using genomic data to infer relationships between individuals or estimating heritability from correlations of phenotypes amongst relatives, understanding the theoretical genetic correlations is a common task. Calculating the correlations between arbitrary relatives in an outbred population, however, can be a careful and somewhat complex task for increasingly distant relatives. This paper introduces an equation-based method that consolidates the results of path analysis and uses easily obtainable data from non-inbred pedigrees to allow the rapid calculation of additive or dominance correlations between relatives, even in more complicated situations such as cousins sharing more than two grandparents and inbreeding.
Introduction
One of the key achievements of the Modern Synthesis in evolutionary biology was the determination of the expected degree of genetic relationship between relatives given a known pedigree. First rigorously defined and expanded in Fisher's landmark 1918 paper [Fisher 1918], the correlation of relatives became one of the first applied tools of quantitative genetics, valuable for selective breeding and the analysis of complex genetic disorders in families. Important contributions by Sewall Wright, especially path analysis [Wright 1922, Wright 1934], and by Gustave Malécot [Malécot 1948, Malécot 1970] helped create the modern methods and notations used to investigate the correlations between relatives for loci with additive or dominance contributions to genetic variance. Further work on the correlation of relatives focused on the correlations due to epistatic effects [Kempthorne 1954, Cockerham 1954, Kempthorne 1955], including adjustments for linkage and linkage disequilibrium [Cockerham 1956, Gallais 1974]. While the correlations amongst relatives are well known for almost all typical situations, the methods of calculating these correlations remain algorithmic and recursive. In this paper, an exact equation using information from the structure of a known pedigree is used to give a consistent mathematical formulation of the additive and dominance genetic correlations between relatives in all situations.
The relation of correlation and kinship to general pedigree variables
While several methods of calculating the correlation of relatives are known, almost all are recursive, requiring the tracing of paths between relatives or the calculation of multiple types of identity coefficients in order to determine the relationship between two related individuals [Wright 1922, Gillois 1965, Jacquard 1972, Karigl 1981, Lange 2012]. However, once the correlations for various sets of relatives have been determined by analysis, it is possible to consolidate them into a general equation which combines previously recognized insights and understandings for different relative pairs. Using notation similar to the original notation of [Wright 1922], for two relatives X and Y with a set of most recent common ancestors A, let us define the variables G_XA, the number of generations of descent from any member of A to X, and G_YA, the number of generations of descent from any member of A to Y. G_XA and G_YA are n and n' in [Wright 1922].
For half-sibs or half-cousins, one additional path is required through the parent of the half-sib/cousin, so also define H(X, Y) as a binary variable that is one if either X or Y are half-siblings or half-cousins and zero otherwise. Define as C the number of elements of A, which designates the number of common ancestors in the generation of A. For typical pedigrees C = 1 for direct descent (parent-offspring or grandparent-grandchild) and C = 2 for other relatives. However, when cousins share more than two ancestors in A, such as double cousins sharing four grandparents, C can increase up to all ancestors of both relatives in A, where the maximum value is reached at C = 2^G_XA. Finally, also including the possibility that the ancestral generation is inbred with coefficient F_A, we can state the additive correlation between relatives X and Y in logarithmic form:

-log2 r^A_XY = G_XA + G_YA + H(X, Y) - log2 C - log2(1 + F_A).    (1)

In the typical case where C = 2 and F_A = 0 this reduces to

-log2 r^A_XY = G_XA + G_YA + H(X, Y) - 1.    (2)

For cases where X and Y are inbred, with inbreeding coefficients F_X and F_Y, the correlation is additionally normalized by the standard deviations of the inbred individuals:

r^A_XY = C (1 + F_A) / (2^(G_XA + G_YA + H(X, Y)) sqrt((1 + F_X)(1 + F_Y))).    (3)

In Table 1, the actual and computed values of -log2 r^A_XY are shown for several situations where C = 1 or C = 2. Therefore, using equation 1 one can quickly and accurately determine the correlation between relatives. The kinship coefficient is defined as half of the non-inbred additive correlation, θ_XY = C (1 + F_A) / 2^(G_XA + G_YA + H(X, Y) + 1), so that r^A_XY = 2 θ_XY for non-inbred X and Y.
Complex cousins
The more complicated situation arises when cousins have more than two relatives in common in the generation of their most recent common ancestor. For example, double first cousins share all four grandparents, since each of their parents comes from one of two pairs of full siblings. "Sesqui" first cousins share three grandparents, one parent coming from a pair of full siblings and the other from a pair of half siblings. Except for double cousins, the terminology for such cousins is not universally standardized. For the purpose of analysis, however, normal cousins of any degree share only two relatives in the generation of their most recent common ancestors. Those that share more have an increased level of relationship, which is reflected in their additive or dominance correlations. Using equation 1 we can derive the correlations for a variety of half or full cousins sharing more than one ancestor in the generation of A. Having the ability to derive r^A_XY, we can additionally calculate the dominance covariance and correlation, in tandem with certain insights about the probability of sharing pairs of alleles that are identical by descent. Malécot used the insights of Fisher and Wright to derive the following expression for the total covariance between two relatives:

Cov(X, Y) = ((ϕ + ϕ')/2) σ²_A + ϕ ϕ' σ²_D.    (4)

In equation 4, σ²_A and σ²_D are the additive and dominance variance respectively, and ϕ and ϕ' are the correlations between each pair of loci in two relatives as to whether both are represented by an allele that is identical by descent. For cousins, ϕ can be determined based on the number of grandparents shared between the cousins. For first cousins, for example, ϕ is always 1/4, whether two, three, or four grandparents are shared; the exception is half first cousins, where ϕ is 1/8. For more distant cousins, however, ϕ increases in multiples of groups of four shared ancestors. For example, in second cousins, ϕ is 1/16 when two to four great-grandparents are shared (1/32 for half second cousins), while for five to eight shared great-grandparents it is 1/8. It proceeds similarly for increasingly distant cousins.
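These formulas are simple enough to check mechanically. The following minimal Python sketch (our own illustration of equations 1-3 and the kinship coefficient; the function names are not from the paper) verifies several textbook values:

```python
from math import log2, sqrt

def additive_correlation(G_XA, G_YA, C=2, H=0, F_A=0.0, F_X=0.0, F_Y=0.0):
    """r_A between X and Y from equations 1-3.

    G_XA, G_YA: generations from the common-ancestor set A down to X and Y.
    C: number of common ancestors in the generation of A.
    H: 1 if X or Y are half-siblings or half-cousins, else 0.
    F_A, F_X, F_Y: inbreeding coefficients of A, X and Y.
    """
    return (C * (1 + F_A) / 2 ** (G_XA + G_YA + H)
            / sqrt((1 + F_X) * (1 + F_Y)))

def kinship(G_XA, G_YA, C=2, H=0, F_A=0.0):
    """Kinship coefficient: half the non-inbred additive correlation."""
    return additive_correlation(G_XA, G_YA, C=C, H=H, F_A=F_A) / 2

# Textbook checks: parent-offspring, full siblings, first and double first cousins.
assert additive_correlation(1, 0, C=1) == 0.5
assert additive_correlation(1, 1, C=2) == 0.5
assert additive_correlation(2, 2, C=2) == 0.125
assert additive_correlation(2, 2, C=4) == 0.25
print(-log2(additive_correlation(2, 2, C=2)))  # 3.0, matching Table 1
```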
For (non-removed) cousins that share common ancestors G = G_XA = G_YA generations in the past, ϕ can be calculated as

ϕ = ceil(C/4) (1/2)^(2(G - 1) + H(X, Y)).    (5)

The function ceil in equation 5 is the ceiling function, which rounds C/4 up to the smallest integer greater than or equal to it. Once we have ϕ, we can calculate the dominance correlation between cousins, r^D_XY = ϕ ϕ', using ϕ' = 2 r_XY - ϕ. Table 2 shows how various degrees of relation affect the additive and dominance correlations between different first through third cousins. (Table 2: the additive and dominance correlations between various first, second, and third cousins using equations 2 and 4, where C is the number of shared relatives in the generation of the most recent common ancestor. Of note are double cousins, where G_X = G_Y = 2 and C = 4, which share the same four grandparents, and double half cousins, which share the same variables except that H(X, Y) = 1.)
Epistatic Covariances
The decomposition of the covariances of relatives who have epistatic genetic effects amongst loci in linkage equilibrium was given by [Kempthorne 1954, Kempthorne 1955, Cockerham 1954]. In short, for additive epistatic effects of two (AA), three (AAA), or n (A^n) loci, the covariance is given by

Cov_(A^n)(X, Y) = (r^A_XY)^n σ²_(A^n),    (6)

where n = 1 is the base case with no epistasis considered. In full siblings or cousins with dominance correlation, for n loci in linkage equilibrium that have m loci with additive epistasis and n - m loci with dominance epistasis, the epistatic covariance for X and Y is

Cov(X, Y) = (r^A_XY)^m (r^D_XY)^(n - m) σ²_(A^m D^(n-m)).    (7)

Results and Discussion
This paper does not present any new results regarding the kinship coefficients or the correlations between relatives themselves; that is a path already well worn and verified by simulation and, increasingly, by genomic data from large pedigrees. However, it does demonstrate that the correlations between arbitrary relatives can be quickly and easily calculated in a non-recursive fashion, taking into account a few aspects of the pedigree tree of the relatives in question. While computational overhead is not much of an issue with modern computers, and tables of kinship coefficients abound, this method may provide a shorthand for both researchers and students of quantitative genetics to ease the process of pedigree analysis.
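To make the cousin calculations concrete, the sketch below extends the earlier one to equations 4-7. The function names and the worked values are again our own illustration, and equation 5 applies only to non-removed cousins (G >= 2):

```python
from math import ceil

def r_additive(G_XA, G_YA, C=2, H=0, F_A=0.0):
    """Additive correlation (equation 1) for non-inbred X and Y."""
    return C * (1 + F_A) / 2 ** (G_XA + G_YA + H)

def phi(G, C=2, H=0):
    """IBD-pair correlation phi for non-removed cousins (equation 5)."""
    return ceil(C / 4) * 0.5 ** (2 * (G - 1) + H)

def r_dominance(G, C=2, H=0):
    """Dominance correlation r_D = phi * phi', with phi' = 2 r_A - phi."""
    p = phi(G, C, H)
    return p * (2 * r_additive(G, G, C, H) - p)

def epistatic_coefficient(G, C=2, H=0, m=1, n=1):
    """Coefficient multiplying sigma^2_{A^m D^(n-m)} (equation 7)."""
    return r_additive(G, G, C, H) ** m * r_dominance(G, C, H) ** (n - m)

# Worked checks against the cousin values discussed above:
assert phi(2) == 0.25                   # first cousins
assert r_dominance(2) == 0.0            # ordinary first cousins: no dominance
assert r_dominance(2, C=4) == 0.0625    # double first cousins: 1/16
print(epistatic_coefficient(2, C=4, m=1, n=2))  # A x D term, double first cousins
```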
2020-03-12T10:39:54.081Z
2020-03-08T00:00:00.000
{ "year": 2020, "sha1": "6f02898ff3e373d3201a5d4919bd3c7e3840b290", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2020/03/08/2020.03.05.978361.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "d248ea205531c8347b9f73be8db48af24bee15b5", "s2fieldsofstudy": [ "Biology", "Mathematics" ], "extfieldsofstudy": [ "Biology" ] }
270556796
pes2o/s2orc
v3-fos-license
Massive aggressive angiomyxoma of ischioanal region with relapse: A case report
Abstract
Background: Aggressive angiomyxoma (AA) is a rare and slow-growing tumor of the pelvic and perineal regions that may extend into other perineal structures. It can present variably, ranging from a painless mass to non-specific symptoms such as dyspareunia. Due to the high relapse rate, extensive tumoral resection is reasonably required to prevent recurrences. It is also commonly confused with other conditions such as lipomas, Bartholin's gland cysts, and hernias.
Case: A 43-yr-old female diagnosed with AA 10 yr ago was evaluated as a consequence of tumor recurrence. She presented with the rare manifestation of a giant cystic pelvic mass involving pararectal and paravaginal tissue in front of the sacrum.
Conclusion: Although AA is a rare and slow-growing tumor, close observation is recommended due to the high relapse rate. Furthermore, extensive tumoral resection and regular follow-up can reduce morbidity in these patients.
Introduction
Aggressive angiomyxoma (AA) is an uncommon and slow-growing vulvovaginal mesenchymal neoplasm categorized as an undifferentiated tumor based on the World Health Organization classification (1). Women experience a higher prevalence of the condition, with roughly 8.5 females affected for every male (2). AA typically originates from the genital area in women of reproductive age. Although this tumor is known for its limited tendency to metastasize, it exhibits aggressiveness by infiltrating nearby structures (3). The primary reasons for morbidity associated with this condition are recurrence and local invasion, which have been reported in 35-72% of cases (4). AA usually appears as a vulvar polyp and is diagnosed histologically. It is also commonly confused with other conditions such as lipomas, Bartholin's gland cysts, and hernias (5, 6). We herein discuss a rare case of AA in a 43-yr-old female with an unusual recurrence involving the pararectal region that required extensive resection. Considering the high recurrence rate, close follow-up was recommended, but after 2 yr she voluntarily withdrew from the follow-up sessions.
Case Presentation
A 43-yr-old primiparous female came to the colorectal department of the Ghaem hospital, Mashhad, Iran, with a slow-growing large perineal mass around the rectum, present for 2 yr and causing gluteal asymmetry. In addition, she complained of severe vulvar discharge, constipation, and polyuria. She had an ectopic pregnancy and in vitro fertilization around 11 yr ago. Her past medical history indicated that she had a mass on the left labia majora 10 yr ago, which was diagnosed as a Bartholin cyst and treated conservatively. Due to the growth of the mass, computed tomography and MRI were performed, and she underwent surgical removal of the mass through the perineum. The pathology of the lesion was AA. She underwent another surgery one year later, when the recurrent tumor was resected entirely.
On examination, a 5-10 cm mass was found in the left gluteal region, causing gluteal asymmetry. On palpation, the mass was soft and non-tender. On inspection, no ulcer or visible discharge was observed. Rectal examination revealed a mobile, extraluminal pressure effect. On vaginal examination, a mass pressure effect was detected on the left lateral side of the vagina, but there was no lesion in the vagina or rectum.
In routine laboratory investigations, severe anemia (Hb = 7.6 mg/dL) was detected, and a blood transfusion was administered. Abdominal, pelvic and chest CT scans reported no abnormality or distant metastasis. Both colonoscopy and upper gastrointestinal endoscopy were normal. MRI revealed a gluteal mass with irregular, lobulated, and heterogeneous borders situated posterior to the uterus, with invasion of the soft tissue of the left perirectal and ischioanal fossae. After contrast injection, the mass showed heterogeneous enhancement and involved parts of the cervix and vagina. It measured approximately 65 mm × 70 mm, elongating to the urogenital fascia and the deep soft tissue next to the left gluteus muscle and labia majora (Figure 1). Recurrent AA was the leading differential diagnosis, so she underwent extensive excision of the giant cystic tumor through the perineum. The tumor was located anterior and posterior to the sacrum, with severe adhesion to the surrounding tissue of the vagina and rectum; therefore, resection with a safe margin was performed (Figure 2). Histopathological examination reported a mesenchymal neoplasm with an extensive myxoid background, consisting of bland cells and hyaline thick-walled vessels, consistent with AA (Figure 3). Four days later, she developed a fever and a foul-smelling discharge from the wound and was returned to the operating room for a possible rectal injury. On reoperation, a rectal wall defect was observed where the mass had been removed. Secondary healing of the rectum was subsequently achieved through further operations and colostomy insertion. She was discharged in good condition, and the colostomy was closed after 4 months.
Figure 1. Pre-operation imaging. A) Sagittal plane of pelvic MRI (T2-weighted), indicating a large mass with longitudinal extension from the posterior of the uterus toward the ischiorectal and ischioanal spaces, with displacement of adjacent organs. B) Transverse plane of pelvic MRI, indicating a left pararectal mass that displaces the rectum to the right and is located behind the vagina. C) This plane shows the involvement of the left side of the vagina (white arrow).
Figure 2. The white arrow (intra-operative field) indicates a left ischiorectal mass that extends the urogenital fascia upward.
Figure 3. A) Histopathological findings indicating proliferation of vascular bundles. B) Spindle-shaped cells and myxoid stroma (white arrow).
Ethical considerations
Oral consent was obtained from the patient for the publication of this case report.
Discussion
Our case report highlights a unique presentation of AA, with an initial occurrence at the left labia majora followed by a late recurrence in the pelvic region posterior to the uterus, extending towards pararectal tissue and the vagina. AA is a rare and slow-growing lesion that predominantly impacts the perineal and pelvic regions and is mostly identified in women of reproductive age (7). It can present variably, ranging from a painless mass next to the labia majora to non-specific symptoms such as dyspareunia, regional pain, and a feeling of pressure from the lump (5). According to the literature, the most commonly affected area is the vulva; however, the retroperitoneum and bladder can also be involved, less commonly (8). In our patient, the first occurrence was at the left labia majora and the second recurrence was pelvic, extending from the posterior of the uterus toward pararectal tissue and the vagina. The tumor size varies from 1-60 cm (9); in our case, the tumor measured approximately 10 cm in the operative field. This tumor can be confused with vulvar masses such as vulvar lipoma and vulvar leiomyomatosis, vulvar abscess, Bartholin's gland cysts, Gartner duct cyst, vaginal polyp, vaginal prolapse, mesonephric duct cyst, vaginal hernia, instantaneous levator hernia, and other soft tissue neoplasms (5, 6). This locally invasive tumor might extend into other perineal structures like the vagina, rectum, and bladder (8); in our patient, the second recurrence involved the rectum and vagina.
Due to the high relapse rates of this tumor, extensive tumoral resection is reasonably required to prevent recurrences, which occur in at least 30% of cases (10). Most recurrences happen within 5 yr of the primary surgery, with around 70% occurring in the first 3 yr. However, late recurrences as long as 14 yr after surgery have been reported (11); our case had two recurrences, one year and eight years after the first resection of the tumor. In deep tumors, even resection of adjacent organs like the bladder or vagina may be considered (10). Likewise, our case required partial resection of the rectum owing to the invasion of the tumor. Surgery is usually the mainstay treatment option for AA. According to Wu et al., imaging modalities, including CT and MRI, play a crucial role in determining the extent of the tumor and locating it before the operation, as the anatomical structures inside the pelvis are complicated (12). AA has many radiographic features. On CT, it demonstrates a hypo- or isodense mass with a clear margin and less attenuation compared to muscle. On enhanced CT or T1-weighted MRI, the tumor shows a swirled appearance, with a tendency to invade other pelvic organs without muscle layer involvement. Both CT and MRI are diagnostic modalities; however, MRI with diffusion-weighted imaging has greater specificity and is superior for establishing the precise location of the tumor and its relation to adjacent structures (12). In our case, MRI revealed a gluteal mass with irregular, lobulated, and heterogeneous borders posterior to the uterus, with soft tissue invasion around the rectum, and the mass showed heterogeneous enhancement after injection.
Histopathology and immunohistochemistry are used to make a definitive diagnosis of AA. On gross section, the tumor has a rubbery consistency, and the gelatinous surface contains hemorrhagic points (9). Histologically, the tumor consists of stellate and spindle-shaped neoplastic cells, distinguished by a myxoid stroma. There is no evidence of mitosis, pleomorphism, or coagulation necrosis, but there is abundant vascular malformation, including vessels of different sizes (13). In our case, histopathological evaluation reported spindle-shaped cells, myxoid stroma, and proliferation of vascular bundles, consistent with AA. Vimentin and CD34 are highly associated with this tumor, and desmin, estrogen receptor, and progesterone receptor are moderately linked to AA, but the tumor is negative for S-100 and CD68 (13). These findings indicate that AA likely originates from mesenchymal cells with myofibroblastic and fibroblastic features. Based on the literature, AA usually carries estrogen and progesterone receptors, which makes it responsive to gonadotropin-releasing hormone agonists (14). This can also explain the high prevalence of this tumor among women of reproductive age. The first occurrence of AA in our patient was shortly after in vitro fertilization. Therefore, hormone suppression therapy was considered in our counseling sessions, and the patient underwent 6 months of hormone therapy with tamoxifen after the operations.
Conclusion
Since AA can easily be confused with other vulvar lesions like Bartholin's gland cysts, levator hernias, and other soft tissue neoplasms, any pelvic or vaginal mass in a female should draw attention to AA. However, pararectal recurrence is rare, and regular long-term follow-up can reduce morbidity in these patients.
2024-06-18T15:13:09.762Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "1d1aad01a36d16ec8a2c56790ac9b0d72505bdb9", "oa_license": "CCBY", "oa_url": "https://knepublishing.com/index.php/ijrm/article/download/16394/26137", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4a5d4fe6615174cab803db52d55deaa839653d38", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14211496
pes2o/s2orc
v3-fos-license
Genotoxicity of Superparamagnetic Iron Oxide Nanoparticles in Granulosa Cells
Nanoparticles aimed at targeting cancer cells while sparing healthy tissue provide an attractive platform for hyperthermia or as carriers of chemotherapeutics. According to the literature, diverse effects of nanoparticles on mammalian reproductive tissue have been described. To address the cyto- and genotoxic impact of nanoparticles on the reproductive system, we examined the effect of superparamagnetic iron oxide nanoparticles (SPIONs) on granulosa cells, which are very important for ovarian function and female fertility. Human granulosa cells (HLG-5) were treated with SPIONs, either coated with lauric acid only (SEON LA), additionally with a protein corona of bovine serum albumin (BSA; SEON LA-BSA), or with dextran (SEON DEX). Both micronuclei testing and the detection of γH2A.X revealed no genotoxic effects of SEON LA-BSA, SEON DEX or SEON LA. Thus, it was demonstrated that different coatings of SPIONs improve biocompatibility, especially in terms of genotoxicity towards cells of the reproductive system.
Introduction
Superparamagnetic iron oxide nanoparticles (SPIONs) have been widely investigated for many years now. Due to their exceptional magnetic, electronic and optical properties, they have turned out to be promising candidates for research and future use in industrial and clinical settings, especially for medical and scientific applications ranging from in vitro diagnostic tests and in vivo imaging to targeted drug delivery and tissue regeneration. In particular, SPION-based contrast enhancement in magnetic resonance imaging (MRI) [1], magnetic hyperthermia treatment [2] and magnetic drug targeting (MDT) [3,4] are of special interest in the therapy and diagnosis of cancer and other diseases [3,5]. Their incorporation into therapeutic drugs and their parallel use in imaging processes enables SPIONs to become "theranostic" agents. Additionally, the use of SPIONs in magnetic tissue engineering is a new concept in biomedicine [6]. SPIONs usually consist of iron oxide cores measuring 5-20 nm in diameter, made of magnetite (Fe3O4) and its oxidized form maghemite (γ-Fe2O3). To increase their colloidal stability and biocompatibility, these iron oxide cores are coated with, e.g., long-chain fatty acids [7] or biocompatible polymers such as chitosan or dextran [8]. Commercially accessible contrast agents like Sinerem, Resovist, Supravist and Ferridex have a surface coating of dextran or carboxydextran [9]. The formation of a surrounding protein corona also contributes to the stabilization and biocompatibility of iron oxide nanoparticles [10]. Because of the highly catalytic properties of nanoparticle surfaces [11,12], their coating may act as a barrier and could reduce their toxic potential. Especially for iron oxide nanoparticles, Fenton-like reactions caused by released iron ions [13] or occurring on the nanoparticle surface have been under discussion as triggers of toxic effects [14]. Here, highly reactive hydroxyl radicals are generated that react with almost all cellular macromolecules such as lipids, proteins, and carbohydrates. Since nanotoxicity has been identified as a tiered process starting with oxidative stress, the oxidation of cellular components may finally result in cell death [12].
It is an important fact that oxidative stress has also been identified as causing DNA damage such as abasic DNA sites and oxidized bases, along with single and double strand DNA breaks [15]. For the future translation of SPIONs from bench to bedside, it is crucial to evaluate their biocompatibility and exclude potential toxic effects. Only few studies have focused to date on the effect of nanoparticles on reproductive cells. Since iron oxide nanoparticles have previously been shown to cross the placenta and accumulate in the fetus [16], applied medical nanoparticles must be absolutely biocompatible and safe. Here, granulosa cells are used as a model system for female reproductive tissue. These cells play a key role in sustaining ovarian function, health and female fertility, and are thus closely associated with the development of the female gamete. In this study, we compare the effects of SPIONs coated with different surface moieties. The first two systems, SEON LA and SEON LA-BSA, derive from the same coprecipitation synthesis, in which the particles are stabilized in situ by a double layer of lauric acid [17]. The difference is that SEON LA-BSA is additionally coated with a BSA shell, which greatly improves colloidal stability, influences biocompatibility and enhances the capacity for drug loading. In a recent, detailed study, we comprehensively characterized the properties of these two systems [10]. The third system is also synthesized by coprecipitation, but a different surface coating strategy was chosen: SEON DEX particles are directly precipitated in dextran-containing iron solution. This enables a narrow core size distribution and high colloidal stability by steric stabilization. These particles have also been comprehensively characterized earlier [18]. As an important aspect, we demonstrated that the appropriate coating of iron oxide nanoparticles ensures their biocompatibility.
Uptake of Iron Oxide Nanoparticles by Granulosa Cells
Nanoparticle-induced toxicity is highly correlated with cellular uptake. Therefore, we measured the cellular iron content under the same conditions as in the toxicity tests. Granulosa cells were incubated for 48 h with three different superparamagnetic iron oxide nanoparticles: SEON LA (coated with lauric acid only), SEON LA-BSA (coated with lauric acid and albumin) and SEON DEX (coated with dextran). After incubation, the cells were washed and the amount of iron was subsequently analyzed from cell lysates by microwave plasma atomic emission spectroscopy (MP-AES). Evaluation of the cellular iron content indicated that SEON LA were effectively taken up by cells, whereas SEON LA-BSA were only weakly taken up and SEON DEX not at all (Figure 1). These results are in concordance with previous investigations on the uptake of SEON nanoparticles by primary human umbilical vein endothelial cells (HUVEC) and by T-cells (Jurkat) [19,20]. Other groups have also shown that the cellular uptake efficiency of iron oxide nanoparticles depends on the surface coating and the protein corona [21,22]. To sum up, the presence of a pre-formed albumin protein corona (SEON LA-BSA) reduces the cellular uptake of the SEON particles remarkably compared to particles stabilized only by a lauric acid layer (SEON LA).
Viability of Granulosa Cells after Incubation with SPION
Viability of HLG-5 granulosa cells was determined using flow cytometry. The cells were stained for phosphatidylserine exposure using Annexin V-Fitc (AxV) and for plasma membrane integrity using propidium iodide (PI).
AxV/PI data were confirmed by staining for mitochondrial membrane potential using DiIC1(5) (data not shown), according to Munoz et al. [23]. AxV/PI staining showed that SEON LA-BSA and SEON DEX did not induce any cytotoxicity up to the tested concentration of 150 µg/mL, whereas SEON LA induced cell death starting at 100 µg/mL (Figure 2). The rate of necrotic cells increased in a dose-dependent manner in SEON LA-treated cells. As granulosa cells do not only protect the oocyte physically, but are furthermore very important for its development, toxic effects of nanoparticles on these cells might be accompanied by reduced fertility or congenital defects. Although only few studies have focused on the effect of nanoparticles on reproductive cells to date, it has been demonstrated so far that magnetic nanoparticles do not affect functionality [24], whereas ZnO and TiO2 nanoparticles may have toxic effects on male gametes, depending on their concentration and composition, and can affect sperm cell functionality [25]. Concerning female gametes, quantum dots have proved to be cytotoxic, consequently negatively influencing oocyte maturation and fertilization [26].
Micronuclei Formation in Granulosa Cells after Incubation with SPION
Micronuclei tests are used in toxicological screening to identify genotoxic substances according to OECD guidelines. Micronuclei are small cytoplasmic bodies formed in the anaphase of mitosis or meiosis. They contain pieces of chromosomes, resulting in a lack of DNA information in one daughter cell. In microscopy, they can be recognized as small nuclei separate from the main nucleus, enclosed in their own nuclear membrane. An augmented frequency of micronuclei serves as a biomarker for genotoxicity [27,28]. Because of their small size, some nanoparticles can easily penetrate through membranes directly to the nucleus, where they can interact with the DNA and thus pose a potential genotoxic hazard [29]. Different concentrations (as indicated) of the iron oxide nanoparticles SEON LA-BSA, SEON DEX and SEON LA were incubated with granulosa cells and analyzed for micronuclei formation after 48 h (Figures 3 and 4) and 72 h (data not shown) using flow cytometry and fluorescence microscopy. Flow cytometry analysis revealed no remarkable difference in the micronuclei number of SEON DEX- and SEON LA-BSA-treated cells compared to the untreated control (Figure 3), whereas for SEON LA the induction of micronuclei was 0.12-fold higher on average compared to control. This was confirmed via fluorescence microscopy (Figure 4). Vinblastine, which causes M phase-specific cell cycle arrest by disrupting microtubule association (not shown), and the topoisomerase IIα inhibitor etoposide were used as positive controls to induce micronuclei formation. So far, it is not clear whether the cytotoxicity caused by high concentrations of SEON LA is a secondary effect of DNA damage; this will have to be further investigated.
DNA Damage in Granulosa Cells after Incubation with SPION
Since micronuclei formation can be caused by DNA double strand breaks, this was evaluated by detection of phosphorylated H2A.X (Ser139) and ATM (Ser1981). The topoisomerase IIα inhibitor etoposide is a very effective inductor of DNA double strand breaks and brings cells into G2/M phase cell cycle arrest [30]. Following DNA double strand breaks, cell cycle checkpoint arrest and DNA repair require the phosphorylation of histone H2A.X at serine 139 by kinases such as ataxia telangiectasia mutated (ATM). Therefore, the phosphorylation of H2A.X (γ-H2A.X) and ATM (pATM) is an important indicator of DNA damage [31]. Treatment of granulosa cells for 48 h with SEON LA, SEON LA-BSA and SEON DEX did not lead to phosphorylation of H2A.X or ATM compared to control cells, here summarized as "DNA damage" (Figure 5). This was also verified by western blot analysis (data not shown). Different sources of DNA damage (exogenous, endogenous, mechanical) can cause a variety of DNA lesions and can thus induce various cellular reactions, including cell cycle arrest, apoptosis and, notably, DNA repair. DNA double strand breaks are considered the most disastrous form of DNA destruction, compromising genomic stability. Therefore, it is very important to ensure nanoparticle safety with respect to DNA damage [32].
Nanoparticles
All iron oxide nanoparticles used were comprehensively physico-chemically characterized previously by Zaloga et al. and Unterweger et al. [10,18]. Briefly, superparamagnetic iron oxide nanoparticles (SPIONs) were synthesized by co-precipitation in aqueous media (core size 7.64 ± 1.6 nm) and subsequently coated in situ with lauric acid (LA), resulting in SEON LA, to form a stable colloid.
They were then additionally coated with bovine serum albumin (BSA) by dilution in excess protein solution and subsequent removal of the unbound protein by ultrafiltration, resulting in SEON LA-BSA [10,33]. Upon formation of a BSA protein corona, the ζ potential decreased drastically, indicating the high stability of aqueous dispersions of SEON LA-BSA. As expected, the surface charge of the SEON LA-BSA particles was pH-dependent, with the point of zero charge just below pH 5, which is very consistent with the isoelectric point of BSA [10]. For the synthesis of SEON DEX, SPIONs were covered with dextran, the suspension was ultrafiltrated, and the particle-bound dextran was finally crosslinked [18]. In SEON DEX particles (core size 4.3 ± 0.9 nm), the dextran content during coprecipitation had an influence on the ζ potential. Formation of a stable colloid was first achieved with 2.0 g of dextran, with a ζ potential of 2.0 ± 0.6 mV. SEON DEX show an agglomeration of roundish magnetite particles embedded in a polymer matrix. The synthesized SPIONs have a spherical morphology; Table 1 provides a summary of the basic physico-chemical nanoparticle characteristics.
Cell Culture
Briefly, human luteinized granulosa cells (HLG-5) were collected from infertile women undergoing in vitro fertilization (IVF) pre-embryo transfer treatment [34]. These cells duplicate every 48 h and were maintained in DMEM supplemented with 10% fetal calf serum (FCS) (both Biochrom, Berlin, Germany) under standard cell culture conditions in a humidified incubator (INCOmed, Memmert, Schwabach, Germany) at 37 °C and 5% CO2. The cells were verified to be free of mycoplasma. For the experiments, the cells were grown to a confluence of 75% and passaged twice a week using 0.25% trypsin/0.02% EDTA in PBS (PAN Biotech, Aidenbach, Germany).
Micronuclei Test
For immunofluorescence staining, the culture medium was withdrawn. After a washing step with PBS (Sigma-Aldrich, St. Louis, MO, USA) and fixation with 3.7% formaldehyde (AppliChem, Darmstadt, Germany) for 30 min, cells were permeabilized for 10 min with 0.5% Triton X-100 (Sigma-Aldrich Chemie GmbH, Steinheim, Germany). Afterwards, cells were incubated for 30 min with RNase (10 mg/mL, Sigma-Aldrich). Staining of the nuclei was achieved with SYTOX green for 20 min (1 µM, Life Technologies, Eugene, OR, USA). After each step, the slides were washed with PBS. Mounting medium was used to mount coverslips on glass slides (Dako North America, Inc., Carpinteria, CA, USA). Examination was performed with an Axio Observer Z.1 fluorescence microscope with an ApoTome (Zeiss, Jena, Germany). Counting and recording of micronuclei was performed as stated by Tolbert et al. [35], with some alterations. Micronuclei are defined as sphere-shaped forms with a diameter of 1/3 to 1/20 of the main nucleus; they have to be in the same focus as the nucleus, should be completely disconnected from the main nucleus, and should appear with a related shape of chromatin. For each sample set, 3000 cells were scored.
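The size criterion in this scoring scheme translates directly into a programmatic filter. Below is a minimal sketch of such a filter (the argument names and example measurements are hypothetical and are not part of the published workflow):

```python
# Minimal sketch of the micronucleus scoring criterion described above.

def is_candidate_micronucleus(mn_diameter, nucleus_diameter,
                              separated_from_nucleus, in_same_focus):
    """Apply the scoring rules: diameter 1/3 to 1/20 of the main nucleus,
    fully separated from it, and in the same focal plane."""
    ratio = mn_diameter / nucleus_diameter
    return (1 / 20 <= ratio <= 1 / 3) and separated_from_nucleus and in_same_focus

# Example: a 2 um object next to a 12 um nucleus passes the size rule.
print(is_candidate_micronucleus(2.0, 12.0, True, True))  # True
```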
For flow cytometry analysis, cells were trypsinized and centrifuged at 600 g for 5 min; the supernatant was discarded and the cells were resuspended via moderate tapping. After adding 200 µL PBS with 2% heat-inactivated fetal bovine serum (FBS), the cells were transferred into tubes containing 100 µL nucleic acid staining solution (125 mg/mL ethidium monoazide (EMA, Molecular Probes by Life Technologies) in PBS with 2% FBS). The tubes were cooled in an ice box and photoactivation was performed with a light source (60 W light bulb, Osram, Munich, Germany) for 20 min at a distance of 30 cm from the tubes. Following photoactivation, 800 µL of PBS with 2% FBS was added and the samples were transferred into 15 mL tubes with 8 mL PBS with 2% FBS. From this point, the samples were protected from light. After centrifugation at 600 g for 5 min, the supernatant was discarded and the cells were resuspended by moderate tapping. 1 mL of lysis solution 1 (deionized water, 0.584 mg/mL NaCl, 1 mg/mL sodium citrate, 0.3 µL/mL IGEPAL (Sigma-Aldrich), 1 mg/mL RNase and 0.2 µM SYTOX green) was added slowly; the tubes were immediately vortexed for 5 s and set aside for 1 h at room temperature. Then lysis solution 2 (deionized water, 85.6 mg/mL sucrose, 15 mg/mL citric acid, and 0.2 µM SYTOX green) was added quickly, followed by vortexing, and the samples were kept at room temperature for another 30 min. Tubes were kept at 4 °C until flow cytometric examination [36]. Tests for statistical significance were carried out using Student's t-test in MS Excel (Microsoft Corporation, Redmond, WA, USA).
DNA Damage Detection
HLG-5 cells were seeded at a concentration of 2 × 10^5 cells/mL in 12-well plates (TPP Techno Plastic Products, Trasadingen, Switzerland). After 24 h, the different SEON nanoparticles (50, 100 and 150 µg/mL) or 10 µM positive control (etoposide) were added in 1 mL; mock-treated cells served as controls. After 48 h and 72 h (data not shown), DNA double strand breaks were detected using the Muse™ Multi-Color DNA Damage Kit (Merck Millipore, Darmstadt, Germany) [37] by staining 1 × 10^5 cells of each sample with anti-phospho-histone H2A.X (Ser139) and anti-phospho-ATM (Ser1981) antibodies. Samples were acquired on the Muse™ Cell Analyzer (Merck Millipore). Tests for statistical significance were carried out using Student's t-test in MS Excel (Microsoft Corporation).
Flow Cytometry
Flow cytometry was performed on a Gallios cytofluorometer (Beckman Coulter, Pasadena, CA, USA). Electronic compensation was used to eliminate bleed-over fluorescence. Data examination was done with Kaluza software, version 1.2 (Beckman Coulter). All flow cytometry experiments were conducted in triplicate, and the results were averaged.
Microwave Plasma-Atomic Emission Spectrometry (MP-AES)
For determination of the absolute cellular iron content, 2 × 10^6 cells were incubated with 150 µg/mL nanoparticles. After 48 h the cells were washed, cell lysates were prepared from 1 × 10^6 cells and analyzed via microwave plasma-atomic emission spectrometry (MP-AES 4200, Agilent, Santa Clara, CA, USA). The total iron level was determined at an emission wavelength of 371.993 nm. For calibration, external iron standards at concentrations ranging from 0.01 to 2.5 µg/mL were used [38].
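Quantification from an external calibration of this kind amounts to fitting a straight line of emission intensity against standard concentration and inverting it for the unknowns. A minimal sketch with made-up intensities (not the study's data):

```python
import numpy as np

# Hypothetical emission intensities at 371.993 nm for external iron
# standards (0.01-2.5 ug/mL) and for an unknown cell lysate.
std_conc = np.array([0.01, 0.05, 0.25, 1.0, 2.5])         # ug/mL
std_sig = np.array([120.0, 590.0, 2950.0, 11800.0, 29500.0])  # counts
sample_sig = 8200.0

# Ordinary least-squares line: signal = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_sig, 1)

# Invert the calibration for the unknown sample.
sample_conc = (sample_sig - intercept) / slope
print(f"iron concentration ~ {sample_conc:.3f} ug/mL")
```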
Conclusions
Little is known about the effects of nanoparticles on reproductive tissue and reproductively relevant cells. Gametes and the embryo are rather vulnerable and are therefore located in a more protected environment, but nanoparticles may well cross these barriers, depending on their composition, size and/or coating [39,40]. As nanoparticles are already being used in clinics or in clinical studies, they will be part of future medical applications, especially in diagnosis and therapy. Hence, it is of greatest significance to ensure the safety of reproductive tissue as well. Depending on their field of application, e.g., as contrast agents for diagnosis or as carriers of therapeutics in magnetic drug targeting (MDT), these particles are coated with different materials. As we found no uptake of SEON DEX in granulosa cells in particular, these particles are considered suitable as contrast agents for magnetic resonance imaging (MRI), because they will most likely remain longer within the blood circulation. On the other hand, SEON LA-BSA, with very low toxicity and low uptake, can be used for MDT in cancer or atherosclerosis therapy. In this study we demonstrated that the coating of iron oxide nanoparticles is essential to ensure biocompatibility. Future studies are urgently needed to guarantee the safe design of nanoparticles, especially for cells within reproductive tissues.
2016-03-22T00:56:01.885Z
2015-11-01T00:00:00.000
{ "year": 2015, "sha1": "e67fbb7582476132262013ba1becdcb19e3d246a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/16/11/25960/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e67fbb7582476132262013ba1becdcb19e3d246a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }